<component name="ProjectCodeStyleConfiguration">
<state>
<option name="PREFERRED_PROJECT_CODE_STYLE" value="Default" />
</state>
</component>
| ivy/.idea/codeStyles/codeStyleConfig.xml/0 | {
"file_path": "ivy/.idea/codeStyles/codeStyleConfig.xml",
"repo_id": "ivy",
"token_count": 51
} | 0 |
# How to Contribute
You can pick an open issue to contribute to from our [ToDo list issues](https://github.com/unifyai/ivy/issues?q=is%3Aopen+is%3Aissue+label%3AToDo), which hold our outstanding subtasks.
Please follow the process below when working on your subtask:
## Steps
1. **Choosing a Task:**
- Choose a task to work on which:
- is not marked as completed with a tick.
- does not have an issue created.
- is not mentioned in the comments.
Currently, there are three open tasks:
- [Function Reformatting](https://unify.ai/docs/ivy/overview/contributing/open_tasks.html#function-formatting)
- [Frontend APIs](https://unify.ai/docs/ivy/overview/contributing/open_tasks.html#frontend-apis)
- [Ivy Experimental API](https://unify.ai/docs/ivy/overview/contributing/open_tasks.html#ivy-experimental-api)
2. **Create Issue:**
- Create a new issue with the title being just the name of the sub-task you would like to work on.
3. **Comment on the ToDo List:**
- Comment on the ToDo list issue with a reference to your new issue like so: `- [ ] #Issue_number`. For example, if your issue number is 12345, then the text of your comment should be `- [ ] #12345`. You could also use just the issue number (`#12345`), or a link to the issue itself (`https://github.com/unifyai/ivy/issues/12345`).
- At some point after your comment is made, your issue will automatically be added to the ToDo list and the comment will be deleted. No need to wait for this to happen before progressing to the next stage. Don’t comment anything else on these ToDo issues.
4. **Start Working:**
   - Open a PR once you have finished working on your subtask, or earlier if you need help, making sure to follow our PR template.
5. **Review Process:**
   - Wait for us to review your PR. Please be patient; our engineers will look into your PR based on the queue we have, so there is no need to ping them.
   - Every time you respond to our requested changes, you must re-request a review in order for us to re-engage with the PR.
   - Once the PR is in good shape, we will merge it into main, and you will become an Ivy contributor!
### Important Notes
- If your PR is not created within 7 days of creating the issue, then a warning message will appear on the issue. We do this in order to keep our ToDo lists moving quickly.
- Please don't take it personally if your issue or PR gets closed because of this 7-day inactivity time limit.
- Finally, we limit the maximum number of open and incomplete sub-task issues to three per person.
Feel free to watch the video below:
[![Video](https://img.youtube.com/vi/wBKTOGmwfbo/0.jpg)](https://www.youtube.com/embed/wBKTOGmwfbo)
For questions, please reach out on [discord](https://discord.gg/MDK979Ga) in the [todo list issues thread](https://discord.com/channels/799879767196958751/1189903501011202128)!
| ivy/CONTRIBUTING.md/0 | {
"file_path": "ivy/CONTRIBUTING.md",
"repo_id": "ivy",
"token_count": 849
} | 1 |
#!/bin/bash
docker build --progress=plain -t unifyai/multiversion:base -f MultiversionDockerFile ..
| ivy/docker/build_multiversiondockerfile.sh/0 | {
"file_path": "ivy/docker/build_multiversiondockerfile.sh",
"repo_id": "ivy",
"token_count": 39
} | 2 |
.. title:: Home
.. include:: ../README.md
:parser: myst_parser.sphinx_
.. toctree::
:hidden:
:maxdepth: -1
Home <self>
.. toctree::
:hidden:
:maxdepth: -1
:caption: The Basics
overview/get_started.rst
demos/quickstart.ipynb
.. toctree::
:hidden:
:maxdepth: -1
:caption: Demos
demos/learn_the_basics.rst
demos/guides.rst
demos/examples_and_demos.rst
.. toctree::
:hidden:
:maxdepth: -1
:caption: Background
overview/motivation.rst
overview/related_work.rst
.. toctree::
:hidden:
:maxdepth: -1
:caption: Contributors
overview/design.rst
overview/contributing.rst
overview/volunteer_ranks.rst
overview/deep_dive.rst
overview/glossary.rst
overview/faq.rst
.. toctree::
:hidden:
:maxdepth: -1
:caption: API Reference
overview/one_liners.rst
.. autosummary::
:toctree: docs/functional
:template: top_functional_toc.rst
:recursive:
:hide-table:
ivy.functional.ivy
.. autosummary::
:toctree: docs/data_classes
:template: top_data_toc.rst
:recursive:
:hide-table:
ivy.data_classes
.. autosummary::
:toctree: docs
:template: top_ivy_toc.rst
:recursive:
:hide-table:
ivy.stateful
ivy.utils
ivy_tests.test_ivy.helpers
| ivy/docs/index.rst/0 | {
"file_path": "ivy/docs/index.rst",
"repo_id": "ivy",
"token_count": 514
} | 3 |
Containers
==========
.. _`ivy.Container`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L52
.. _`dict`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L51
.. _`ivy.Container.cont_map`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3070
.. _`ivy.Container.cont_all_true`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L1592
.. _`ivy.Container.cont_to_iterator`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L2043
.. _`ContainerBase`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L51
.. _`ivy.Container.cont_multi_map`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L623
.. _`ivy.Container.cont_diff`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L427
.. _`ivy.Container.cont_common_key_chains`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L741
.. _`ivy.Container.cont_multi_map_in_function`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L162
.. _`ivy.Container.tan`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/elementwise.py#L7347
.. _`ivy.Container.roll`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/manipulation.py#L927
.. _`instance method is added`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/__init__.py#L683
.. _`inherits`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L52
.. _`ContainerWithElementwise`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/elementwise.py#L9
.. _`__repr__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3629
.. _`__getattr__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3860
.. _`__setattr__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3882
.. _`__getitem__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3934
.. _`__setitem__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3976
.. _`__contains__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L3996
.. _`__getstate__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L4004
.. _`__setstate__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/base.py#L4019
.. _`implemented`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L133
.. _`__add__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L191
.. _`__sub__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L290
.. _`__mul__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L389
.. _`__truediv__`: https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/container/container.py#L399
.. _`repo`: https://github.com/unifyai/ivy
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`containers thread`: https://discord.com/channels/799879767196958751/1189906066549506048
The `ivy.Container`_ inherits from `dict`_, and is useful for storing nested data.
For example, the container is equally suitable for storing batches of training data, or for storing the weights of a network.
The methods of the :class:`ivy.Container` class are more varied than those of the :class:`ivy.Array`.
All methods of the :class:`ivy.Array` are instance methods, and almost all of them directly wrap a function in the functional API.
For the :class:`ivy.Container`, there are also methods which are specific to the container itself, for performing nested operations on the leaves of the container for example.
Overall, this results in the following three mutually exclusive groups of :class:`ivy.Container` methods.
Each of these is explained in the following sub-sections.
#. Container instance methods
#. API instance methods
#. API special methods
Container Instance Methods
--------------------------
Container instance methods are methods which are specific to the container itself.
A few examples include `ivy.Container.cont_map`_, which is used for mapping a function to all leaves of the container, `ivy.Container.cont_all_true`_, which determines whether all container leaves evaluate to boolean :code:`True`, and `ivy.Container.cont_to_iterator`_, which returns an iterator for traversing the leaves of the container.
There are many more examples; check out the abstract `ContainerBase`_ class to see some more!
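
To give a feel for these methods, below is a minimal sketch of their usage (the values are illustrative, and we assume here that :code:`cont_to_iterator` yields key-chain and leaf pairs):

.. code-block:: python

    x = ivy.Container(a=ivy.array([1., 2.]),
                      b=ivy.Container(c=ivy.array([3.])))

    # map a function onto every leaf of the container
    y = x.cont_map(lambda leaf, kc: leaf * 2)

    # check whether all leaves evaluate to boolean True
    all_true = x.cont_all_true()

    # traverse the leaves of the container
    for kc, leaf in x.cont_to_iterator():
        print(kc, leaf)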
API Instance Methods
--------------------
The *API* instance methods serve a similar purpose to the instance methods of the :class:`ivy.Array` class.
They enable functions in Ivy's functional API to be called as instance methods on the :class:`ivy.Container` class.
The difference is that with the :class:`ivy.Container`, the API function is applied recursively to all the leaves of the container.
The :class:`ivy.Container` instance methods should **exactly match** the instance methods of the :class:`ivy.Array`, both in terms of the methods implemented and the argument which :code:`self` replaces in the function being called.
This means :code:`self` should always replace the first array argument in the function.
`ivy.Container.add <https://github.com/unifyai/ivy/blob/1dba30aae5c087cd8b9ffe7c4b42db1904160873/ivy/container/elementwise.py#L158>`_ is a good example.
However, as with the :class:`ivy.Array` class, it's important to bear in mind that this is *not necessarily the first argument*, although in most cases it will be.
We also **do not** set the :code:`out` argument to :code:`self` for instance methods.
If the only array argument is the :code:`out` argument, then we do not implement this instance method.
For example, we do not implement an instance method for `ivy.zeros <https://github.com/unifyai/ivy/blob/1dba30aae5c087cd8b9ffe7c4b42db1904160873/ivy/functional/ivy/creation.py#L116>`_.
As is the case for :class:`ivy.Array`, the organization of these instance methods follows the same organizational structure as the files in the functional API.
The :class:`ivy.Container` class `inherits`_ from many category-specific container classes, such as `ContainerWithElementwise`_, each of which implements the category-specific instance methods.
As with :class:`ivy.Array`, given the simple set of rules which underpin how these instance methods should all be implemented, if a source-code implementation is not found, then this `instance method is added`_ programmatically. This serves as a helpful backup in cases where some instance methods are accidentally missed out.
Again, the benefit of the source code implementations is that this makes the code much more readable, with important methods not being entirely absent from the code.
It also enables other helpful perks, such as auto-completions in the IDE etc.
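
For example, below is a minimal sketch of one such instance method in action (the values are illustrative):

.. code-block:: python

    x = ivy.Container(a=ivy.array([1., 2.]), b=ivy.array([3.]))

    # equivalent to applying ivy.add recursively to every leaf of x
    y = x.add(ivy.array([1.]))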
API Special Methods
--------------------
All non-operator special methods are implemented in `ContainerBase`_, which is the abstract base class for all containers.
These special methods include `__repr__`_, which controls how the container is printed in the terminal, and `__getattr__`_, which primarily enables keys in the underlying :code:`dict` to be queried as attributes.
If no attribute, item or method matching the provided name is found on the container itself, then the leaves are also recursively traversed in search of the attribute.
If the attribute turns out to be callable on the leaves, then it is called on each leaf and the leaves are updated with the returned results; for a more detailed explanation with examples, see the code block below.
The remaining special methods are: `__setattr__`_, which enables attribute setting to update the underlying :code:`dict`; `__getitem__`_, which enables the underlying :code:`dict` to be queried via a chain of keys; `__setitem__`_, which enables the underlying :code:`dict` to be set via a chain of keys; `__contains__`_, which enables us to check for chains of keys in the underlying :code:`dict`; and `__getstate__`_ and `__setstate__`_, which together enable the container to be pickled and unpickled.
.. code-block:: python
x = ivy.Container(a=ivy.array([0.]), b=ivy.Container(a=ivy.array([[0.]]), b=ivy.array([1., 2., 3.])))
print(x.shape)
{
a: [
1
],
b: {
a: [
1,
1
],
b: [
3
]
}
}
print(x.ndim)
{
a: 1,
b: {
a: 2,
b: 1
}
}
num_dims = x.shape.__len__()
print(num_dims)
{
a: 1,
b: {
a: 2,
b: 1
}
}
print(len(x.shape))
    # doesn't work because CPython requires the return type of `len` to be `int`
print(num_dims.real)
{
a: 1,
b: {
a: 2,
b: 1
}
}
print(bin(num_dims))
    # doesn't work because some Python built-in functions enforce the types of their input arguments
# external method flexibility enables positional and keyword arguments to be passed into the attribute
y = ivy.Container(l1=[1, 2, 3], c1=ivy.Container(l1=[3, 2, 1], l2=[4, 5, 6]))
print(y.__getattr__("count", 1))
{
c1: {
l1: 1,
l2: 0
},
l1: 1
}
print(y.count(1))
    # doesn't work because the argument 1 is not passed through to `__getattr__`
print(y.__getattr__("__add__", [10]))
{
c1: {
l1: [
3,
2,
1,
10
],
l2: [
4,
5,
6,
10
]
},
l1: [
1,
2,
3,
10
]
}
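
The key-chain behaviour of `__getitem__`_, `__setitem__`_ and `__contains__`_ can be sketched as follows (a minimal example with illustrative values):

.. code-block:: python

    x = ivy.Container(a=ivy.array([0.]), b=ivy.Container(c=ivy.array([1.])))

    # query the underlying dict via a chain of keys
    leaf = x["b/c"]

    # set the underlying dict via a chain of keys
    x["b/c"] = ivy.array([2.])

    # check for a chain of keys in the underlying dict
    assert "b/c" in x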
As for the special methods which are `implemented`_ in the main :class:`ivy.Container` class, they all make calls to the corresponding standard operator functions.
As a result, the operator functions will make use of the special methods of the left-hand operand if available, and will otherwise make use of the reverse special method of the right-hand operand.
For instance, if the left-hand operand at any given leaf of the container is an :class:`ivy.Array`, then the operator function will make calls to the special methods of this array object.
As explained in the `Arrays <arrays.rst>`_ section of the Deep Dive, these special methods will in turn call the corresponding functions from the ivy functional API.
Examples include `__add__`_, `__sub__`_, `__mul__`_ and `__truediv__`_ which will make calls to :func:`ivy.add`, :func:`ivy.subtract`, :func:`ivy.multiply` and :func:`ivy.divide` respectively if the left-hand operand is an :class:`ivy.Array` object.
Otherwise, these special methods will be called on whatever objects are at the leaves of the container, such as int, float, :class:`ivy.NativeArray` etc.
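
Below is a short sketch of this operator behaviour (the values are illustrative):

.. code-block:: python

    x = ivy.Container(a=ivy.array([1, 2]), b=ivy.Container(c=ivy.array([3])))

    # __add__ is called on the container, which calls the standard add
    # operator function, which in turn calls ivy.add on each ivy.Array leaf
    y = x + 1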
Nestable Functions
------------------
As introduced in the `Function Types <function_types.rst>`_ section, most functions in Ivy are *nestable*, which means that they can accept :class:`ivy.Container` instances in place of **any** of the arguments.
Here, we expand on this explanation.
Please check out the explanation in the `Function Types <function_types.rst>`_ section first.
**Explicitly Nestable Functions**
The *nestable* behaviour is added to any function which is decorated with the `handle_nestable <https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/ivy/func_wrapper.py#L429>`_ wrapper.
This wrapper causes the function to be applied at each leaf of any containers passed in the input.
More information on this can be found in the `Function Wrapping <https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/docs/partial_source/deep_dive/function_wrapping.rst>`_ section of the Deep Dive.
Additionally, any nestable function which returns multiple arrays will return the same number of containers for its container counterpart.
This property makes the function symmetric with regards to the input-output behavior, irrespective of whether :class:`ivy.Array` or :class:`ivy.Container` instances are used.
Any argument in the input can be replaced with a container without changing the number of inputs, and the presence or absence of ivy.Container instances in the input should not change the number of return values of the function.
In other words, if containers are detected in the input, then we should return a separate container for each array that the function would otherwise return.
The current implementation checks whether the leaves of the container hold a list of arrays.
If they do, the container is unstacked into multiple containers (as many as there are arrays), which are then returned inside a list.
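
As a sketch of this symmetry, assuming a hypothetical nestable function :code:`split_in_two` which returns two arrays for array inputs:

.. code-block:: python

    # with array inputs, two arrays are returned
    first, second = split_in_two(ivy.array([1, 2, 3, 4]))  # hypothetical function

    # with a container input, two containers are returned,
    # one for each array that would otherwise be returned
    first, second = split_in_two(ivy.Container(a=ivy.array([1, 2, 3, 4])))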
**Implicitly Nestable Functions**
*Compositional* functions are composed of other nestable functions, and hence are already **implicitly nestable**.
Therefore, we do not need to wrap them explicitly at all.
Let's take the function :func:`ivy.cross_entropy` as an example.
The internally called functions are: :func:`ivy.clip`, :func:`ivy.log`, :func:`ivy.sum` and :func:`ivy.negative`, each of which are themselves *nestable*.
.. code-block:: python
def cross_entropy(
true: Union[ivy.Array, ivy.NativeArray],
pred: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: Optional[int] = -1,
        epsilon: float = 1e-7,
out: Optional[ivy.Array] = None
) -> ivy.Array:
pred = ivy.clip(pred, epsilon, 1 - epsilon)
log_pred = ivy.log(pred)
return ivy.negative(ivy.sum(log_pred * true, axis, out=out), out=out)
Therefore, when passing an :class:`ivy.Container` instance in the input, each internal function will, in turn, correctly handle the container, and return a new container with the correct operations having been performed.
This makes it very easy and intuitive to debug the code, as the code is stepped through chronologically.
In effect, all leaves of the input container are being processed concurrently, during the computation steps of the :func:`ivy.cross_entropy` function.
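
For instance, a minimal sketch of passing containers straight into the compositional function (the values are illustrative):

.. code-block:: python

    true = ivy.Container(a=ivy.array([0., 1.]), b=ivy.array([1., 0.]))
    pred = ivy.Container(a=ivy.array([0.3, 0.7]), b=ivy.array([0.6, 0.4]))

    # each internally called function (ivy.clip, ivy.log, ivy.sum,
    # ivy.negative) handles the containers, and so the returned loss
    # is itself a container
    loss = ivy.cross_entropy(true, pred)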
However, what if we had added the `handle_nestable <https://github.com/unifyai/ivy/blob/5f58c087906a797b5cb5603714d5e5a532fc4cd4/ivy/func_wrapper.py#L407>`_ wrapping as a decorator directly to the function :func:`ivy.cross_entropy`?
In this case, the :func:`ivy.cross_entropy` function would itself be called multiple times, on each of the leaves of the container.
The functions :func:`ivy.clip`, :func:`ivy.log`, :func:`ivy.sum` and :func:`ivy.negative` would each only consume and return arrays, and debugging the :func:`ivy.cross_entropy` function would then become less intuitively chronological, with each leaf of the input container now processed sequentially, rather than concurrently.
Therefore, our approach is to **not** wrap any compositional functions which are already *implicitly nestable* as a result of the *nestable* functions called internally.
**Explicitly Nestable Compositional Functions**
There may be some compositional functions which are not implicitly nestable for some reason, and in such cases adding the explicit `handle_nestable <https://github.com/unifyai/ivy/blob/5f58c087906a797b5cb5603714d5e5a532fc4cd4/ivy/func_wrapper.py#L407>`_ wrapping may be necessary.
One such example is the :func:`ivy.linear` function which is not implicitly nestable despite being compositional. This is because of the use of special functions like :func:`__len__` and :func:`__list__` which, among other functions, are not nestable and can't be made nestable.
But we should try to avoid this, in order to make the flow of computation as intuitive to the user as possible.
When tracing the code, the computation graph is **identical** in either case, and there will be no implications on performance whatsoever.
The implicit nestable solution may be slightly less efficient in eager mode, as the leaves of the container are traversed multiple times rather than once, but if performance is of concern then the code should always be traced in any case.
The distinction is only really relevant when stepping through and debugging with eager mode execution, and for the reasons outlined above, the preference is to keep compositional functions implicitly nestable where possible.
**Shared Nested Structure**
When the nested structures of the multiple containers are *shared* but not *identical*, then the behaviour of the nestable function is a bit different.
Containers have *shared* nested structures if all unique leaves in any of the containers are children of a nested structure which is shared by all other containers.
Take the example below: the nested structures of containers :code:`x` and :code:`y` are shared but not identical.
.. code-block:: python
x = ivy.Container(a={'b': 2, 'c': 4}, d={'e': 6, 'f': 9})
y = ivy.Container(a=2, d=3)
The shared key chains (chains of keys, used for indexing the container) are :code:`a` and :code:`d`.
The key chains unique to :code:`x` are :code:`a/b`, :code:`a/c`, :code:`d/e` and :code:`d/f`.
The unique key chains all share the same base structure as all other containers (in this case only one other container, :code:`y`).
Therefore, the containers :code:`x` and :code:`y` have a shared nested structure.
When calling *nestable* functions on containers with non-identical structure, the shared leaves of the shallowest container are broadcast to the leaves of the deepest container.
It's helpful to look at an example:
.. code-block:: python
print(x / y)
{
a: {
b: 1.0,
c: 2.0
},
d: {
e: 2.0,
f: 3.0
}
}
In this case, the integer at :code:`y.a` is broadcast to the leaves :code:`x.a.b` and :code:`x.a.c`, and the integer at :code:`y.d` is broadcast to the leaves :code:`x.d.e` and :code:`x.d.f`.
Another example of containers with shared nested structure is given below:
.. code-block:: python
x = ivy.Container(a={'b': 2, 'c': 4}, d={'e': 6, 'f': 8})
y = ivy.Container(a=2, d=3)
z = ivy.Container(a={'b': 10, 'c': {'g': 11, 'h': 12}}, d={'e': 13, 'f': 14})
Adding these containers together would result in the following:
.. code-block:: python
print(x + y + z)
{
a: {
b: 14,
c: {
g: 17,
                h: 18
}
},
d: {
e: 22,
f: 25
}
}
An example of containers which **do not** have a shared nested structure is given below:
.. code-block:: python
x = ivy.Container(a={'b': 2, 'c': 4}, d={'e': 6, 'f': 8})
y = ivy.Container(a=2, d=3, g=4)
z = ivy.Container(a={'b': 10, 'c': {'g': 11, 'h': 12}}, d={'e': 13, 'g': 14})
This is for three reasons: (a) the key chain :code:`g` is not shared by any container other than :code:`y`, (b) the key chain :code:`d/f` of :code:`x` is not present in :code:`z`, despite :code:`d` not being a leaf node in :code:`z`, and (c) likewise the key chain :code:`d/g` of :code:`z` is not present in :code:`x`, despite :code:`d` not being a leaf node in :code:`x`.
**Round Up**
This should have hopefully given you a good feel for containers, and how these are handled in Ivy.
If you have any questions, please feel free to reach out on `discord`_ in the `containers thread`_!
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/oHcoYFi2rvI" class="video">
</iframe>
| ivy/docs/overview/deep_dive/containers.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/containers.rst",
"repo_id": "ivy",
"token_count": 7349
} | 4 |
Ivy-Lint: Ivy's Custom Code Formatters
======================================
Overview
--------
``ivy-lint`` is a specialized suite of formatters crafted for the Ivy codebase. It addresses unique formatting requirements not catered to by standard Python formatters. While the suite currently highlights the ``FunctionOrderingFormatter``, we're continually expanding to include more formatters tailored to Ivy's needs.
Existing Formatters
-------------------
FunctionOrderingFormatter
~~~~~~~~~~~~~~~~~~~~~~~~~
This formatter ensures a standardized order of declarations within Python files, organizing functions, classes, and assignments based on a hierarchy designed for the Ivy codebase.
**Purpose**: To bring a sense of uniformity and structure to the code files by sorting various Python declarations.
**Target Files**: Specifically designed for frontends and tests.
How the Formatter Works:
~~~~~~~~~~~~~~~~~~~~~~~~
1. **Header Management**:
- Removes pre-existing headers in the source code based on specific patterns.
2. **Comments Handling**:
- Extracts code components along with their leading comments, ensuring that relevant comments are retained during the reordering process.
3. **Dependency Handling**:
- Constructs dependency graphs to understand and maintain the relationships between classes and assignments.
4. **Sorting Logic**:
- Prioritizes imports, followed by assignments based on certain dependencies, then classes, and finally functions.
- Preserves module-level docstrings at the top of the file.
   - Organizes helper functions and primary functions into separate sections for clarity (see the sketch after this list).
5. **File Processing**:
- Processes files that align with certain patterns, rearranging their content as needed.
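
For illustration, below is a minimal sketch of the kind of layout the formatter aims to produce; the file contents and the section headers here are hypothetical, shown only to convey the ordering:

.. code-block:: python

    """Module-level docstring, preserved at the top of the file."""

    # imports come first
    import ivy

    # assignments which other declarations depend on come next
    DEFAULT_DTYPE = "float32"

    # --- Helpers --- #


    def _validate_dtype(dtype):
        # leading comments travel with their declarations when reordered
        return dtype or DEFAULT_DTYPE


    # --- Main --- #


    def ones(shape, dtype=None):
        return ivy.ones(shape, dtype=_validate_dtype(dtype))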
Integration and Usage
---------------------
To get the best out of ``ivy-lint``, integrate it within a pre-commit hook. This ensures that whenever code changes are about to be committed, the suite checks and, if needed, formats the files to align with Ivy's standards.
For comprehensive details on weaving ``ivy-lint`` into your development practices, kindly refer to our `formatting guide <formatting.rst>`_.
Contribution
------------
We’re always thrilled to welcome contributions to ``ivy-lint``. If you're brimming with ideas for a new formatter or can enhance our existing ones, please connect with us either on our GitHub repository or our `discord <https://discord.gg/Y3prZYHS>`_ channel.
Round Up
--------
``ivy-lint`` stands as a testament to Ivy's commitment to code clarity and uniformity. As the landscape of our needs shifts, we foresee further refining and expanding our suite of formatters.
For all discussions or inquiries, you're always welcome on `discord <https://discord.gg/Y3prZYHS>`_ in the `formatting thread <https://discord.com/channels/799879767196958751/1190247322626572408>`_.
| ivy/docs/overview/deep_dive/ivy_lint.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/ivy_lint.rst",
"repo_id": "ivy",
"token_count": 681
} | 5 |
Motivation
==========
| (a) `ML Explosion <motivation/ml_explosion.rst>`_
| A huge number of ML tools have exploded onto the scene!
|
| (b) `Why Unify? <motivation/why_unify.rst>`_
| Why should we try to unify them?
|
| (c) `Standardization <motivation/standardization.rst>`_
| We’re collaborating with the `Consortium for Python Data API Standards <https://data-apis.org>`_
.. toctree::
:hidden:
:maxdepth: -1
:caption: Background
motivation/ml_explosion.rst
motivation/why_unify.rst
motivation/standardization.rst
| ivy/docs/overview/motivation.rst/0 | {
"file_path": "ivy/docs/overview/motivation.rst",
"repo_id": "ivy",
"token_count": 195
} | 6 |
.. _`RWorks Vendor-Specific APIs`:
Vendor-Specific APIs
====================
.. _`CUDA`: https://developer.nvidia.com/cuda-toolkit
.. _`TensorRT`: https://developer.nvidia.com/tensorrt
.. _`NVIDIA`: https://www.nvidia.com/
.. _`PyTorch`: https://pytorch.org/
.. _`TensorFlow`: https://www.tensorflow.org/
.. _`Compute Unified Device Architecture (CUDA)`: https://developer.nvidia.com/cuda-toolkit
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. |tensorrt| image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/related_work/vendor_specific_apis/tensorrt.png
:height: 15pt
:class: dark-light
.. |cuda| image:: https://raw.githubusercontent.com/unifyai/unifyai.github.io/main/img/externally_linked/related_work/vendor_specific_apis/cuda.png
:height: 20pt
:class: dark-light
Vendor-specific APIs provide an interface for defining customized operations for hardware from a specific vendor.
These libraries are written exclusively for that vendor's hardware, and so the code is not generalized, nor is it intended to be.
These APIs are often used by higher level multi-vendor compilers and frameworks, and most machine learning practitioners will not interface with these low level vendor-specific APIs directly.
TensorRT |tensorrt|
-------------------
Built on top of `CUDA`_, `TensorRT`_ is a C++ library for high performance inference on `NVIDIA`_ GPUs and deep learning accelerators.
It is integrated with `PyTorch`_ and `TensorFlow`_.
When conducting deep learning training in a proprietary or custom framework, the TensorRT C++ API can be used to import and accelerate models.
Several optimizations contribute to the high performance: reduced mixed precision maximizes throughput; layer and tensor fusion optimizes device memory; kernel autotuning selects the best data layers and algorithms; time fusion optimizes recurrent neural networks; multi-stream execution manages input streams; and dynamic tensor memory minimizes memory consumption.
CUDA |cuda|
-----------
`Compute Unified Device Architecture (CUDA)`_ is a parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general purpose processing, an approach called general-purpose computing on GPUs (GPGPU).
It is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.
It is designed to work with programming languages such as C, C++, and Fortran.
| ivy/docs/overview/related_work/vendor_specific_apis.rst/0 | {
"file_path": "ivy/docs/overview/related_work/vendor_specific_apis.rst",
"repo_id": "ivy",
"token_count": 684
} | 7 |
# flake8: noqa
# global
import copy
import functools
import numpy as np
from operator import mul
from typing import Optional
# local
import ivy
from .conversions import args_to_native, to_ivy
from .activations import _ArrayWithActivations
from .creation import _ArrayWithCreation
from .data_type import _ArrayWithDataTypes
from .device import _ArrayWithDevice
from .elementwise import _ArrayWithElementwise
from .general import _ArrayWithGeneral
from .gradients import _ArrayWithGradients
from .image import _ArrayWithImage
from .layers import _ArrayWithLayers
from .linear_algebra import _ArrayWithLinearAlgebra
from .losses import _ArrayWithLosses
from .manipulation import _ArrayWithManipulation
from .norms import _ArrayWithNorms
from .random import _ArrayWithRandom
from .searching import _ArrayWithSearching
from .set import _ArrayWithSet
from .sorting import _ArrayWithSorting
from .statistical import _ArrayWithStatistical
from .utility import _ArrayWithUtility
from ivy.func_wrapper import handle_view_indexing
from .experimental import (
_ArrayWithSearchingExperimental,
_ArrayWithActivationsExperimental,
_ArrayWithConversionsExperimental,
_ArrayWithCreationExperimental,
_ArrayWithData_typeExperimental,
_ArrayWithDeviceExperimental,
_ArrayWithElementWiseExperimental,
_ArrayWithGeneralExperimental,
_ArrayWithGradientsExperimental,
_ArrayWithImageExperimental,
_ArrayWithLayersExperimental,
_ArrayWithLinearAlgebraExperimental,
_ArrayWithLossesExperimental,
_ArrayWithManipulationExperimental,
_ArrayWithNormsExperimental,
_ArrayWithRandomExperimental,
_ArrayWithSetExperimental,
_ArrayWithSortingExperimental,
_ArrayWithStatisticalExperimental,
_ArrayWithUtilityExperimental,
)
class Array(
_ArrayWithActivations,
_ArrayWithCreation,
_ArrayWithDataTypes,
_ArrayWithDevice,
_ArrayWithElementwise,
_ArrayWithGeneral,
_ArrayWithGradients,
_ArrayWithImage,
_ArrayWithLayers,
_ArrayWithLinearAlgebra,
_ArrayWithLosses,
_ArrayWithManipulation,
_ArrayWithNorms,
_ArrayWithRandom,
_ArrayWithSearching,
_ArrayWithSet,
_ArrayWithSorting,
_ArrayWithStatistical,
_ArrayWithUtility,
_ArrayWithActivationsExperimental,
_ArrayWithConversionsExperimental,
_ArrayWithCreationExperimental,
_ArrayWithData_typeExperimental,
_ArrayWithDeviceExperimental,
_ArrayWithElementWiseExperimental,
_ArrayWithGeneralExperimental,
_ArrayWithGradientsExperimental,
_ArrayWithImageExperimental,
_ArrayWithLayersExperimental,
_ArrayWithLinearAlgebraExperimental,
_ArrayWithLossesExperimental,
_ArrayWithManipulationExperimental,
_ArrayWithNormsExperimental,
_ArrayWithRandomExperimental,
_ArrayWithSearchingExperimental,
_ArrayWithSetExperimental,
_ArrayWithSortingExperimental,
_ArrayWithStatisticalExperimental,
_ArrayWithUtilityExperimental,
):
def __init__(self, data, dynamic_backend=None):
_ArrayWithActivations.__init__(self)
_ArrayWithCreation.__init__(self)
_ArrayWithDataTypes.__init__(self)
_ArrayWithDevice.__init__(self)
_ArrayWithElementwise.__init__(self)
_ArrayWithGeneral.__init__(self)
_ArrayWithGradients.__init__(self)
_ArrayWithImage.__init__(self)
_ArrayWithLayers.__init__(self)
_ArrayWithLinearAlgebra.__init__(self)
_ArrayWithLosses.__init__(self)
_ArrayWithManipulation.__init__(self)
_ArrayWithNorms.__init__(self)
_ArrayWithRandom.__init__(self)
_ArrayWithSearching.__init__(self)
_ArrayWithSet.__init__(self)
_ArrayWithSorting.__init__(self)
_ArrayWithStatistical.__init__(self)
_ArrayWithUtility.__init__(self)
        _ArrayWithActivationsExperimental.__init__(self)
        _ArrayWithConversionsExperimental.__init__(self)
        _ArrayWithCreationExperimental.__init__(self)
        _ArrayWithData_typeExperimental.__init__(self)
        _ArrayWithDeviceExperimental.__init__(self)
        _ArrayWithElementWiseExperimental.__init__(self)
        _ArrayWithGeneralExperimental.__init__(self)
        _ArrayWithGradientsExperimental.__init__(self)
        _ArrayWithImageExperimental.__init__(self)
        _ArrayWithLayersExperimental.__init__(self)
        _ArrayWithLinearAlgebraExperimental.__init__(self)
        _ArrayWithLossesExperimental.__init__(self)
        _ArrayWithManipulationExperimental.__init__(self)
        _ArrayWithNormsExperimental.__init__(self)
        _ArrayWithRandomExperimental.__init__(self)
        _ArrayWithSearchingExperimental.__init__(self)
        _ArrayWithSetExperimental.__init__(self)
        _ArrayWithSortingExperimental.__init__(self)
        _ArrayWithStatisticalExperimental.__init__(self)
        _ArrayWithUtilityExperimental.__init__(self)
self._init(data, dynamic_backend)
self._view_attributes(data)
def _init(self, data, dynamic_backend=None):
if ivy.is_ivy_array(data):
self._data = data.data
elif ivy.is_native_array(data):
self._data = data
elif isinstance(data, np.ndarray):
self._data = ivy.asarray(data)._data
elif isinstance(data, (list, tuple)):
self._data = ivy.asarray(data)._data
elif ivy.is_ivy_sparse_array(data):
self._data = data._data
elif ivy.is_native_sparse_array(data):
self._data = data._data
else:
raise ivy.utils.exceptions.IvyException(
"data must be ivy array, native array or ndarray"
)
self._size = None
self._strides = None
self._itemsize = None
self._dtype = None
self._device = None
self._dev_str = None
self._pre_repr = None
self._post_repr = None
self._backend = ivy.current_backend(self._data).backend
if dynamic_backend is not None:
self._dynamic_backend = dynamic_backend
else:
self._dynamic_backend = ivy.dynamic_backend
self.weak_type = False # to handle 0-D jax front weak typed arrays
def _view_attributes(self, data):
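        # attributes used to track view relationships between arrays,
        # i.e. the base array and the manipulations that produced self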
self._base = None
self._view_refs = []
self._manipulation_stack = []
self._torch_base = None
self._torch_view_refs = []
self._torch_manipulation = None
# Properties #
# ---------- #
@property
def backend(self):
return self._backend
@property
def dynamic_backend(self):
return self._dynamic_backend
@dynamic_backend.setter
def dynamic_backend(self, value):
from ivy.functional.ivy.gradients import _variable
from ivy.utils.backend.handler import _data_to_new_backend, _get_backend_for_arg
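        # when enabling the dynamic backend, convert the wrapped native data
        # over to the currently set backend, preserving any gradient
        # (variable) status of the data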
if value:
ivy_backend = ivy.with_backend(self._backend)
if ivy_backend.gradients._is_variable(self.data):
native_var = ivy_backend.gradients._variable_data(
self,
)
data = _data_to_new_backend(native_var, ivy_backend).data
self._data = _variable(data).data
else:
self._data = _data_to_new_backend(self, ivy_backend).data
self._backend = ivy.backend
else:
self._backend = _get_backend_for_arg(self.data.__class__.__module__).backend
self._dynamic_backend = value
@property
def data(self) -> ivy.NativeArray:
"""The native array being wrapped in self."""
return self._data
@property
def dtype(self) -> ivy.Dtype:
"""Data type of the array elements."""
if self._dtype is None:
self._dtype = ivy.dtype(self._data)
return self._dtype
@property
def device(self) -> ivy.Device:
"""Hardware device the array data resides on."""
if self._device is None:
self._device = ivy.dev(self._data)
return self._device
@property
def mT(self) -> ivy.Array:
"""Transpose of a matrix (or a stack of matrices).
Returns
-------
ret
array whose last two dimensions (axes) are permuted in reverse order
relative to original array (i.e., for an array instance having shape
``(..., M, N)``, the returned array must have shape ``(..., N, M)``).
The returned array must have the same data type as the original array.
"""
ivy.utils.assertions.check_greater(
len(self._data.shape), 2, allow_equal=True, as_array=False
)
return ivy.matrix_transpose(self._data)
@property
def ndim(self) -> int:
"""Number of array dimensions (axes)."""
return len(tuple(self._data.shape))
@property
def shape(self) -> ivy.Shape:
"""Array dimensions."""
return ivy.Shape(self._data.shape)
@property
def size(self) -> Optional[int]:
"""Number of elements in the array."""
if self._size is None:
if ivy.current_backend_str() in ["numpy", "jax"]:
self._size = self._data.size
return self._size
self._size = (
functools.reduce(mul, self._data.shape)
if len(self._data.shape) > 0
else 1
)
return self._size
@property
def itemsize(self) -> Optional[int]:
"""Size of array elements in bytes."""
if self._itemsize is None:
self._itemsize = ivy.itemsize(self._data)
return self._itemsize
@property
def strides(self) -> Optional[int]:
"""Get strides across each dimension."""
if self._strides is None:
# for this to work consistently for non-contiguous arrays
# we must pass self to ivy.strides, not self.data
self._strides = ivy.strides(self)
return self._strides
@property
def T(self) -> ivy.Array:
"""Transpose of the array.
Returns
-------
ret
two-dimensional array whose first and last dimensions (axes) are
permuted in reverse order relative to original array.
"""
ivy.utils.assertions.check_equal(len(self._data.shape), 2, as_array=False)
return ivy.matrix_transpose(self._data)
@property
def base(self) -> ivy.Array:
"""Original array referenced by view."""
return self._base
@property
def real(self) -> ivy.Array:
"""Real part of the array.
Returns
-------
ret
array containing the real part of each element in the array.
The returned array must have the same shape and data type as
the original array.
"""
return ivy.real(self._data)
@property
def imag(self) -> ivy.Array:
"""Imaginary part of the array.
Returns
-------
ret
array containing the imaginary part of each element in the array.
The returned array must have the same shape and data type as
the original array.
"""
return ivy.imag(self._data)
# Setters #
# --------#
@data.setter
def data(self, data):
ivy.utils.assertions.check_true(
ivy.is_native_array(data), "data must be native array"
)
self._init(data)
# Built-ins #
# ----------#
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs={}):
args, kwargs = args_to_native(*args, **kwargs)
return func(*args, **kwargs)
def __ivy_array_function__(self, func, types, args, kwargs):
# Cannot handle items that have __ivy_array_function__ other than those of
# ivy arrays or native arrays.
for t in types:
if (
hasattr(t, "__ivy_array_function__")
and (t.__ivy_array_function__ is not ivy.Array.__ivy_array_function__)
or (
hasattr(ivy.NativeArray, "__ivy_array_function__")
and (
t.__ivy_array_function__
is not ivy.NativeArray.__ivy_array_function__
)
)
):
return NotImplemented
# Arguments contain no overrides, so we can safely call the
# overloaded function again.
return func(*args, **kwargs)
def __array__(self, *args, **kwargs):
args, kwargs = args_to_native(*args, **kwargs)
return self._data.__array__(*args, dtype=self.dtype, **kwargs)
def __array_prepare__(self, *args, **kwargs):
args, kwargs = args_to_native(*args, **kwargs)
return self._data.__array_prepare__(*args, **kwargs)
def __array_ufunc__(self, *args, **kwargs):
args, kwargs = args_to_native(*args, **kwargs)
return self._data.__array_ufunc__(*args, **kwargs)
def __array_wrap__(self, *args, **kwargs):
args, kwargs = args_to_native(*args, **kwargs)
return self._data.__array_wrap__(*args, **kwargs)
def __array_namespace__(self, api_version=None):
return ivy
def __repr__(self):
if self._dev_str is None:
self._dev_str = ivy.as_ivy_dev(self.device)
self._pre_repr = "ivy.array"
if "gpu" in self._dev_str:
self._post_repr = f", dev={self._dev_str})"
else:
self._post_repr = ")"
sig_fig = ivy.array_significant_figures
dec_vals = ivy.array_decimal_values
if self.backend == "" or ivy.is_local():
# If the array was constructed using implicit backend
backend = ivy.current_backend()
else:
            # Required in the case that the backend is different
            # from the currently set backend
backend = ivy.with_backend(self.backend)
arr_np = backend.to_numpy(self._data)
rep = (
np.array(ivy.vec_sig_fig(arr_np, sig_fig))
if self.size > 0
else np.array(arr_np)
)
with np.printoptions(precision=dec_vals):
repr = rep.__repr__()[:-1].partition(", dtype")[0].partition(", dev")[0]
return (
self._pre_repr
+ repr[repr.find("(") :]
+ self._post_repr.format(ivy.current_backend_str())
)
def __dir__(self):
return self._data.__dir__()
def __getattribute__(self, item):
return super().__getattribute__(item)
def __getattr__(self, item):
try:
attr = self._data.__getattribute__(item)
except AttributeError:
attr = self._data.__getattr__(item)
return to_ivy(attr)
@handle_view_indexing
def __getitem__(self, query):
return ivy.get_item(self._data, query)
def __setitem__(self, query, val):
self._data = ivy.set_item(self._data, query, val)._data
def __contains__(self, key):
return self._data.__contains__(key)
def __getstate__(self):
data_dict = {}
# only pickle the native array
data_dict["data"] = self.data
# also store the local ivy framework that created this array
data_dict["backend"] = self.backend
data_dict["device_str"] = ivy.as_ivy_dev(self.device)
return data_dict
def __setstate__(self, state):
# we can construct other details of ivy.Array
# just by re-creating the ivy.Array using the native array
# get the required backend
(
ivy.set_backend(state["backend"])
if state["backend"] is not None and len(state["backend"]) > 0
else ivy.current_backend(state["data"])
)
ivy_array = ivy.array(state["data"])
ivy.previous_backend()
self.__dict__ = ivy_array.__dict__
# TODO: what about placement of the array on the right device ?
# device = backend.as_native_dev(state["device_str"])
# backend.to_device(self, device)
def __pos__(self):
return ivy.positive(self._data)
def __neg__(self):
return ivy.negative(self._data)
def __pow__(self, power):
"""ivy.Array special method variant of ivy.pow. This method simply
wraps the function, and so the docstring for ivy.pow also applies to
this method with minimal changes.
Parameters
----------
self
Input array or float.
power
Array or float power. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
            an array containing the element-wise results. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([1, 2, 3])
>>> y = x ** 2
>>> print(y)
ivy.array([1, 4, 9])
>>> x = ivy.array([1.2, 2.1, 3.5])
>>> y = x ** 2.9
>>> print(y)
ivy.array([ 1.69678056, 8.59876156, 37.82660675])
"""
return ivy.pow(self._data, power)
def __rpow__(self, power):
return ivy.pow(power, self._data)
def __ipow__(self, power):
return ivy.pow(self._data, power)
def __add__(self, other):
"""ivy.Array special method variant of ivy.add. This method simply
wraps the function, and so the docstring for ivy.add also applies to
this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
other
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
an array containing the element-wise sums. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.array([4, 5, 6])
>>> z = x + y
>>> print(z)
ivy.array([5, 7, 9])
"""
return ivy.add(self._data, other)
def __radd__(self, other):
"""ivy.Array reverse special method variant of ivy.add. This method
simply wraps the function, and so the docstring for ivy.add also
applies to this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
other
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
an array containing the element-wise sums. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
>>> x = 1
>>> y = ivy.array([4, 5, 6])
>>> z = x + y
>>> print(z)
ivy.array([5, 6, 7])
"""
return ivy.add(other, self._data)
def __iadd__(self, other):
return ivy.add(self._data, other)
def __sub__(self, other):
"""ivy.Array special method variant of ivy.subtract. This method simply
wraps the function, and so the docstring for ivy.subtract also applies
to this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
other
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
an array containing the element-wise differences. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
With :class:`ivy.Array` instances only:
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.array([4, 5, 6])
>>> z = x - y
>>> print(z)
ivy.array([-3, -3, -3])
"""
return ivy.subtract(self._data, other)
def __rsub__(self, other):
"""ivy.Array reverse special method variant of ivy.subtract. This
method simply wraps the function, and so the docstring for ivy.subtract
also applies to this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
other
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
an array containing the element-wise differences. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
>>> x = 1
>>> y = ivy.array([4, 5, 6])
>>> z = x - y
>>> print(z)
ivy.array([-3, -4, -5])
"""
return ivy.subtract(other, self._data)
def __isub__(self, other):
return ivy.subtract(self._data, other)
def __mul__(self, other):
return ivy.multiply(self._data, other)
def __rmul__(self, other):
return ivy.multiply(other, self._data)
def __imul__(self, other):
return ivy.multiply(self._data, other)
def __mod__(self, other):
return ivy.remainder(self._data, other)
def __rmod__(self, other):
return ivy.remainder(other, self._data)
def __imod__(self, other):
return ivy.remainder(self._data, other)
def __divmod__(self, other):
return ivy.divide(self._data, other), ivy.remainder(self._data, other)
def __rdivmod__(self, other):
return ivy.divide(other, self._data), ivy.remainder(other, self._data)
def __truediv__(self, other):
"""ivy.Array reverse special method variant of ivy.divide. This method
simply wraps the function, and so the docstring for ivy.divide also
applies to this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
other
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.array([4, 5, 6])
>>> z = x / y
>>> print(z)
ivy.array([0.25 , 0.40000001, 0.5 ])
"""
return ivy.divide(self._data, other)
def __rtruediv__(self, other):
return ivy.divide(other, self._data)
def __itruediv__(self, other):
return ivy.divide(self._data, other)
def __floordiv__(self, other):
return ivy.floor_divide(self._data, other)
def __rfloordiv__(self, other):
return ivy.floor_divide(other, self._data)
def __ifloordiv__(self, other):
return ivy.floor_divide(self._data, other)
def __matmul__(self, other):
return ivy.matmul(self._data, other)
def __rmatmul__(self, other):
return ivy.matmul(other, self._data)
def __imatmul__(self, other):
return ivy.matmul(self._data, other)
def __abs__(self):
"""ivy.Array special method variant of ivy.abs. This method simply
wraps the function, and so the docstring for ivy.abs also applies to
this method with minimal changes.
Parameters
----------
self
input array. Should have a numeric data type.
Returns
-------
ret
an array containing the absolute value of each element
in ``self``. The returned array must have the same data
type as ``self``.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([6, -2, 0, -1])
>>> print(abs(x))
ivy.array([6, 2, 0, 1])
>>> x = ivy.array([-1.2, 1.2])
>>> print(abs(x))
ivy.array([1.2, 1.2])
"""
return ivy.abs(self._data)
def __float__(self):
if hasattr(self._data, "__float__"):
if "complex" in self.dtype:
res = float(self.real)
else:
res = self._data.__float__()
else:
res = float(ivy.to_scalar(self._data))
if res is NotImplemented:
return res
return to_ivy(res)
def __int__(self):
if hasattr(self._data, "__int__"):
if "complex" in self.dtype:
res = int(self.real)
else:
res = self._data.__int__()
else:
res = int(ivy.to_scalar(self._data))
if res is NotImplemented:
return res
return to_ivy(res)
def __complex__(self):
res = complex(ivy.to_scalar(self._data))
if res is NotImplemented:
return res
return to_ivy(res)
def __bool__(self):
return self._data.__bool__()
def __dlpack__(self, stream=None):
        # Not completely supported yet as paddle and tf
        # don't support __dlpack__ and __dlpack_device__ dunders right now
# created issues
# paddle https://github.com/PaddlePaddle/Paddle/issues/56891
# tf https://github.com/tensorflow/tensorflow/issues/61769
return ivy.to_dlpack(self)
def __dlpack_device__(self):
return self._data.__dlpack_device__()
def __lt__(self, other):
"""ivy.Array special method variant of ivy.less. This method simply
wraps the function, and so the docstring for ivy.less also applies to
this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
>>> x = ivy.array([6, 2, 3])
>>> y = ivy.array([4, 5, 3])
>>> z = x < y
>>> print(z)
ivy.array([ False, True, False])
"""
return ivy.less(self._data, other)
def __le__(self, other):
"""ivy.Array special method variant of ivy.less_equal. This method
simply wraps the function, and so the docstring for ivy.less_equal also
applies to this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
>>> x = ivy.array([6, 2, 3])
>>> y = ivy.array([4, 5, 3])
>>> z = x <= y
>>> print(z)
ivy.array([ False, True, True])
"""
return ivy.less_equal(self._data, other)
def __eq__(self, other):
"""ivy.Array special method variant of ivy.equal. This method simply
wraps the function, and so the docstring for ivy.equal also applies to
this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
With :class:`ivy.Array` instances:
>>> x1 = ivy.array([1, 0, 1, 1])
>>> x2 = ivy.array([1, 0, 0, -1])
>>> y = x1 == x2
>>> print(y)
ivy.array([True, True, False, False])
>>> x1 = ivy.array([1, 0, 1, 0])
>>> x2 = ivy.array([0, 1, 0, 1])
>>> y = x1 == x2
>>> print(y)
ivy.array([False, False, False, False])
"""
return ivy.equal(self._data, other)
def __ne__(self, other):
"""ivy.Array special method variant of ivy.not_equal. This method
simply wraps the function, and so the docstring for ivy.not_equal also
applies to this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
With :class:`ivy.Array` instances:
>>> x1 = ivy.array([1, 0, 1, 1])
>>> x2 = ivy.array([1, 0, 0, -1])
>>> y = x1 != x2
>>> print(y)
ivy.array([False, False, True, True])
>>> x1 = ivy.array([1, 0, 1, 0])
>>> x2 = ivy.array([0, 1, 0, 1])
>>> y = x1 != x2
>>> print(y)
ivy.array([True, True, True, True])
"""
return ivy.not_equal(self._data, other)
def __gt__(self, other):
"""ivy.Array special method variant of ivy.greater. This method simply
wraps the function, and so the docstring for ivy.greater also applies
to this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
With :class:`ivy.Array` instances:
>>> x = ivy.array([6, 2, 3])
>>> y = ivy.array([4, 5, 3])
>>> z = x > y
>>> print(z)
ivy.array([True,False,False])
With mix of :class:`ivy.Array` and :class:`ivy.Container` instances:
>>> x = ivy.array([[5.1, 2.3, -3.6]])
>>> y = ivy.Container(a=ivy.array([[4.], [5.1], [6.]]),b=ivy.array([[-3.6], [6.], [7.]]))
>>> z = x > y
>>> print(z)
{
a: ivy.array([[True, False, False],
[False, False, False],
[False, False, False]]),
b: ivy.array([[True, True, False],
[False, False, False],
[False, False, False]])
}
"""
return ivy.greater(self._data, other)
def __ge__(self, other):
"""ivy.Array special method variant of ivy.greater_equal. This method
        simply wraps the function, and so the docstring for ivy.greater_equal
        also applies to this method with minimal changes.
Parameters
----------
self
first input array. May have any data type.
other
second input array. Must be compatible with x1 (with Broadcasting). May have any
data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type of bool.
Examples
--------
With :class:`ivy.Array` instances:
>>> x = ivy.array([6, 2, 3])
>>> y = ivy.array([4, 5, 6])
>>> z = x >= y
>>> print(z)
ivy.array([True,False,False])
With mix of :class:`ivy.Array` and :class:`ivy.Container` instances:
>>> x = ivy.array([[5.1, 2.3, -3.6]])
>>> y = ivy.Container(a=ivy.array([[4.], [5.1], [6.]]),b=ivy.array([[5.], [6.], [7.]]))
>>> z = x >= y
>>> print(z)
{
a: ivy.array([[True, False, False],
[True, False, False],
[False, False, False]]),
b: ivy.array([[True, False, False],
[False, False, False],
[False, False, False]])
}
"""
return ivy.greater_equal(self._data, other)
def __and__(self, other):
return ivy.bitwise_and(self._data, other)
def __rand__(self, other):
return ivy.bitwise_and(other, self._data)
def __iand__(self, other):
return ivy.bitwise_and(self._data, other)
def __or__(self, other):
return ivy.bitwise_or(self._data, other)
def __ror__(self, other):
return ivy.bitwise_or(other, self._data)
def __ior__(self, other):
return ivy.bitwise_or(self._data, other)
def __invert__(self):
return ivy.bitwise_invert(self._data)
def __xor__(self, other):
"""ivy.Array special method variant of ivy.bitwise_xor. This method
simply wraps the function, and so the docstring for ivy.bitwise_xor
also applies to this method with minimal changes.
Parameters
----------
self
first input array. Should have an integer or boolean data type.
other
second input array. Must be compatible with ``self`` (see :ref:`broadcasting`).
Should have an integer or boolean data type.
Returns
-------
ret
an array containing the element-wise results. The returned array must have a
data type determined by :ref:`type-promotion`.
Examples
--------
With :class:`ivy.Array` instances:
>>> a = ivy.array([1, 2, 3])
>>> b = ivy.array([3, 2, 1])
>>> y = a ^ b
>>> print(y)
ivy.array([2,0,2])
With mix of :class:`ivy.Array` and :class:`ivy.Container` instances:
>>> x = ivy.Container(a = ivy.array([-67, 21]))
>>> y = ivy.array([12, 13])
>>> z = x ^ y
>>> print(z)
{
a: ivy.array([-79, 24])
}
"""
return ivy.bitwise_xor(self._data, other)
def __rxor__(self, other):
return ivy.bitwise_xor(other, self._data)
def __ixor__(self, other):
return ivy.bitwise_xor(self._data, other)
def __lshift__(self, other):
return ivy.bitwise_left_shift(self._data, other)
def __rlshift__(self, other):
return ivy.bitwise_left_shift(other, self._data)
def __ilshift__(self, other):
return ivy.bitwise_left_shift(self._data, other)
def __rshift__(self, other):
"""ivy.Array special method variant of ivy.bitwise_right_shift. This
method simply wraps the function, and so the docstring for
ivy.bitwise_right_shift also applies to this method with minimal
changes.
Parameters
----------
self
first input array. Should have an integer data type.
other
second input array. Must be compatible with ``self`` (see :ref:`broadcasting`).
Should have an integer data type. Each element must be greater than or equal
to ``0``.
Returns
-------
ret
an array containing the element-wise results. The returned array must have
a data type determined by :ref:`type-promotion`.
Examples
--------
With :class:`ivy.Array` instances only:
>>> a = ivy.array([2, 3, 4])
>>> b = ivy.array([0, 1, 2])
>>> y = a >> b
>>> print(y)
ivy.array([2, 1, 1])
"""
return ivy.bitwise_right_shift(self._data, other)
def __rrshift__(self, other):
"""ivy.Array reverse special method variant of ivy.bitwise_right_shift.
This method simply wraps the function, and so the docstring for
ivy.bitwise_right_shift also applies to this method with minimal
changes.
Parameters
----------
self
second input array, used as the shift amounts. Should have an integer
data type. Each element must be greater than or equal to ``0``.
other
first input array, whose elements are shifted. Should have an integer
data type. Must be compatible with ``self`` (see :ref:`broadcasting`).
Returns
-------
ret
an array containing the element-wise results. The returned array must have
a data type determined by :ref:`type-promotion`.
Examples
--------
>>> a = 32
>>> b = ivy.array([0, 1, 2])
>>> y = a >> b
>>> print(y)
ivy.array([32, 16, 8])
"""
return ivy.bitwise_right_shift(other, self._data)
def __irshift__(self, other):
return ivy.bitwise_right_shift(self._data, other)
def __deepcopy__(self, memodict={}):
try:
return to_ivy(self._data.__deepcopy__(memodict))
except AttributeError:
# ToDo: try and find more elegant solution to jax inability to
# deepcopy device arrays
if ivy.current_backend_str() == "jax":
np_array = copy.deepcopy(self._data)
jax_array = ivy.array(np_array)
return to_ivy(jax_array)
return to_ivy(copy.deepcopy(self._data))
except RuntimeError:
from ivy.functional.ivy.gradients import _is_variable
# paddle and torch don't support the deepcopy protocol on non-leaf tensors
if _is_variable(self):
return to_ivy(copy.deepcopy(ivy.stop_gradient(self)._data))
return to_ivy(copy.deepcopy(self._data))
def __len__(self):
if not len(self._data.shape):
return 0
try:
return len(self._data)
except TypeError:
return self._data.shape[0]
def __iter__(self):
if self.ndim == 0:
raise TypeError("iteration over a 0-d ivy.Array not supported")
if ivy.current_backend_str() == "paddle":
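# paddle cannot index/iterate arrays of these dtypes directly,
# so unstack them into a list of sub-arrays first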
if self.dtype in ["int8", "int16", "uint8", "float16"]:
return iter([to_ivy(i) for i in ivy.unstack(self._data)])
elif self.ndim == 1:
return iter([to_ivy(i).squeeze(axis=0) for i in self._data])
return iter([to_ivy(i) for i in self._data])
| ivy/ivy/data_classes/array/array.py/0 | {
"file_path": "ivy/ivy/data_classes/array/array.py",
"repo_id": "ivy",
"token_count": 17731
} | 8 |
# global
import abc
from typing import Optional, Union, Tuple, List, Literal, Sequence, Callable
# local
import ivy
class _ArrayWithLayersExperimental(abc.ABC):
def max_pool1d(
self: ivy.Array,
kernel: Union[int, Tuple[int, ...]],
strides: Union[int, Tuple[int, ...]],
padding: Union[str, int, Tuple[int], List[Tuple[int, int]]],
/,
*,
data_format: str = "NWC",
dilation: Union[int, Tuple[int]] = 1,
ceil_mode: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of `ivy.max_pool1d`. This method
simply wraps the function, and so the docstring for `ivy.max_pool1d`
also applies to this method with minimal changes.
Parameters
----------
self
Input image *[batch_size,w,d_in]*.
kernel
The size of the window for each dimension of the input tensor.
strides
The stride of the sliding window for each dimension of input.
padding
"SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
"NWC" or "NCW". Defaults to "NWC".
dilation
The stride between elements within a sliding window, must be > 0.
ceil_mode
If True, ceil is used instead of floor to compute the output shape.
This ensures that every element is covered by a sliding window.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The result of the max pooling operation.
Examples
--------
>>> x = ivy.arange(0, 24.).reshape((2, 3, 4))
>>> print(x.max_pool1d(2, 2, 'SAME'))
ivy.array([[[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]],
[[16., 17., 18., 19.],
[20., 21., 22., 23.]]])
>>> x = ivy.arange(0, 24.).reshape((2, 3, 4))
>>> print(x.max_pool1d(2, 2, 'VALID'))
ivy.array([[[ 4., 5., 6., 7.]],
[[16., 17., 18., 19.]]])
"""
return ivy.max_pool1d(
self,
kernel,
strides,
padding,
data_format=data_format,
dilation=dilation,
ceil_mode=ceil_mode,
out=out,
)
def max_pool2d(
self: ivy.Array,
kernel: Union[int, Tuple[int, ...]],
strides: Union[int, Tuple[int, ...]],
padding: Union[str, int, Tuple[int], List[Tuple[int, int]]],
/,
*,
data_format: str = "NHWC",
dilation: Union[int, Tuple[int, ...]] = 1,
ceil_mode: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of `ivy.max_pool2d`. This method
simply wraps the function, and so the docstring for `ivy.max_pool2d`
also applies to this method with minimal changes.
Parameters
----------
self
Input image *[batch_size,h,w,d_in]*.
kernel
The size of the window for each dimension of the input tensor.
strides
The stride of the sliding window for each dimension of input.
padding
"SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
"NHWC" or "NCHW". Defaults to "NHWC".
dilation
The stride between elements within a sliding window, must be > 0.
ceil_mode
If True, ceil is used instead of floor to compute the output shape.
This ensures that every element is covered by a sliding window.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The result of the max pooling operation.
Examples
--------
>>> x = ivy.arange(12.).reshape((2, 1, 3, 2))
>>> print(x.max_pool2d((2, 2), (1, 1), 'SAME'))
ivy.array([[[[ 2., 3.],
[ 4., 5.],
[ 4., 5.]]],
[[[ 8., 9.],
[10., 11.],
[10., 11.]]]])
>>> x = ivy.arange(48.).reshape((2, 4, 3, 2))
>>> print(x.max_pool2d(3, 1, 'VALID'))
ivy.array([[[[16., 17.]],
[[22., 23.]]],
[[[40., 41.]],
[[46., 47.]]]])
"""
return ivy.max_pool2d(
self,
kernel,
strides,
padding,
data_format=data_format,
dilation=dilation,
ceil_mode=ceil_mode,
out=out,
)
def max_pool3d(
self: ivy.Array,
kernel: Union[int, Tuple[int, ...]],
strides: Union[int, Tuple[int, ...]],
padding: Union[str, int, Tuple[int], List[Tuple[int, int]]],
/,
*,
data_format: str = "NDHWC",
dilation: Union[int, Tuple[int, ...]] = 1,
ceil_mode: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute a 3-D max pool given 5-D input x.
Parameters
----------
self
Input volume *[batch_size,d,h,w,d_in]*.
kernel
Convolution filters *[d,h,w]*.
strides
The stride of the sliding window for each dimension of input.
padding
"SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
NDHWC" or "NCDHW". Defaults to "NDHWC".
dilaton
The stride between elements within a sliding window, must be > 0.
ceil_mode
If True, ceil is used instead of floor to compute the output shape.
This ensures that every element is covered by a sliding window.
out
optional output array, for writing the result to. It must have
a shape that the inputs broadcast to.
Returns
-------
ret
The result of the pooling operation.
Examples
--------
>>> x = ivy.arange(48.).reshape((2, 3, 2, 2, 2))
>>> print(x.max_pool3d(2, 2, 'VALID'))
ivy.array([[[[[14., 15.]]]],
[[[[38., 39.]]]]])
>>> print(x.max_pool3d(2, 2, 'SAME'))
ivy.array([[[[[14., 15.]]],
[[[22., 23.]]]],
[[[[38., 39.]]],
[[[46., 47.]]]]])
"""
return ivy.max_pool3d(
self,
kernel,
strides,
padding,
data_format=data_format,
dilation=dilation,
ceil_mode=ceil_mode,
out=out,
)
def avg_pool1d(
self: ivy.Array,
kernel: Union[int, Tuple[int]],
strides: Union[int, Tuple[int]],
padding: str,
/,
*,
data_format: str = "NWC",
count_include_pad: bool = False,
ceil_mode: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of `ivy.avg_pool1d`. This method
simply wraps the function, and so the docstring for `ivy.avg_pool1d`
also applies to this method with minimal changes.
Parameters
----------
self
Input image *[batch_size,w,d_in]*.
kernel
The size of the window for each dimension of the input tensor.
strides
The stride of the sliding window for each dimension of input.
padding
"SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
"NWC" or "NCW". Defaults to "NWC".
count_include_pad
Whether to include padding in the averaging calculation.
ceil_mode
Whether to use ceil or floor for creating the output shape.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The result of the average pooling operation.
Examples
--------
>>> x = ivy.arange(0, 24.).reshape((2, 3, 4))
>>> print(x.avg_pool1d(2, 2, 'SAME'))
ivy.array([[[ 2., 3., 4., 5.],
[ 8., 9., 10., 11.]],
[[14., 15., 16., 17.],
[20., 21., 22., 23.]]])
>>> x = ivy.arange(0, 24.).reshape((2, 3, 4))
>>> print(x.avg_pool1d(2, 2, 'VALID'))
ivy.array([[[ 2., 3., 4., 5.]],
[[14., 15., 16., 17.]]])
"""
return ivy.avg_pool1d(
self,
kernel,
strides,
padding,
data_format=data_format,
count_include_pad=count_include_pad,
ceil_mode=ceil_mode,
out=out,
)
def avg_pool2d(
self: ivy.Array,
kernel: Union[int, Tuple[int], Tuple[int, int]],
strides: Union[int, Tuple[int], Tuple[int, int]],
padding: str,
/,
*,
data_format: str = "NHWC",
count_include_pad: bool = False,
ceil_mode: bool = False,
divisor_override: Optional[int] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of `ivy.avg_pool2d`. This method
simply wraps the function, and so the docstring for `ivy.avg_pool2d`
also applies to this method with minimal changes.
Parameters
----------
self
Input image *[batch_size,h,w,d_in]*.
kernel
The size of the window for each dimension of the input tensor.
strides
The stride of the sliding window for each dimension of input.
padding
"SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
"NHWC" or "NCHW". Defaults to "NHWC".
count_include_pad
Whether to include padding in the averaging calculation.
ceil_mode
Whether to use ceil or floor for creating the output shape.
divisor_override
If given, it will be used as the divisor,
otherwise kernel_size will be used.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The result of the average pooling operation.
Examples
--------
>>> x = ivy.arange(12.).reshape((2, 1, 3, 2))
>>> print(x.avg_pool2d((2, 2), (1, 1), 'SAME'))
ivy.array([[[[ 1., 2.],
[ 3., 4.],
[ 4., 5.]]],
[[[ 7., 8.],
[ 9., 10.],
[10., 11.]]]])
>>> x = ivy.arange(48.).reshape((2, 4, 3, 2))
>>> print(x.avg_pool2d(3, 1, 'VALID'))
ivy.array([[[[ 8., 9.]],
[[14., 15.]]],
[[[32., 33.]],
[[38., 39.]]]])
"""
return ivy.avg_pool2d(
self,
kernel,
strides,
padding,
data_format=data_format,
count_include_pad=count_include_pad,
ceil_mode=ceil_mode,
divisor_override=divisor_override,
out=out,
)
def avg_pool3d(
self: ivy.Array,
kernel: Union[int, Tuple[int], Tuple[int, int, int]],
strides: Union[int, Tuple[int], Tuple[int, int, int]],
padding: str,
/,
*,
data_format: str = "NDHWC",
count_include_pad: bool = False,
ceil_mode: bool = False,
divisor_override: Optional[int] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute a 3-D max pool given 5-D input x.
Parameters
----------
self
Input volume *[batch_size,d,h,w,d_in]*.
kernel
Convolution filters *[d,h,w]*.
strides
The stride of the sliding window for each dimension of input.
padding
SAME" or "VALID" indicating the algorithm, or list indicating
the per-dimension paddings.
data_format
NDHWC" or "NCDHW". Defaults to "NDHWC".
count_include_pad
Whether to include padding in the averaging calculation.
ceil_mode
Whether to use ceil or floor for creating the output shape.
divisor_override
If specified, it will be used as divisor,
otherwise kernel_size will be used.
out
optional output array, for writing the result to. It must have
a shape that the inputs broadcast to.
Returns
-------
ret
The result of the pooling operation.
Examples
--------
>>> x = ivy.arange(48.).reshape((2, 3, 2, 2, 2))
>>> print(x.avg_pool3d(2, 2, 'VALID'))
ivy.array([[[[[ 7., 8.]]]],
[[[[31., 32.]]]]])
>>> print(x.avg_pool3d(2, 2, 'SAME'))
ivy.array([[[[[ 7., 8.]]],
[[[19., 20.]]]],
[[[[31., 32.]]],
[[[43., 44.]]]]])
"""
return ivy.avg_pool3d(
self,
kernel,
strides,
padding,
data_format=data_format,
count_include_pad=count_include_pad,
ceil_mode=ceil_mode,
divisor_override=divisor_override,
out=out,
)
def dct(
self: ivy.Array,
/,
*,
type: Literal[1, 2, 3, 4] = 2,
n: Optional[int] = None,
axis: int = -1,
norm: Optional[Literal["ortho"]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.dct. This method simply
wraps the function, and so the docstring for ivy.dct also applies to
this method with minimal changes.
Parameters
----------
self
The input signal.
type
The type of the dct. Must be 1, 2, 3 or 4.
n
The length of the transform. If n is less than the input signal length,
then x is truncated; if n is larger, then x is zero-padded.
axis
The axis to compute the DCT along. Default is ``-1``.
norm
The type of normalization to be applied. Must be either None or "ortho".
out
optional output array, for writing the result to.
Returns
-------
ret
Array containing the transformed input.
Examples
--------
>>> x = ivy.array([8., 16., 24., 32., 40., 48., 56., 64.])
>>> x.dct(type=2, norm="ortho")
ivy.array([ 102., -51.5, 0., -5.39, 0., -1.61, 0., -0.406])
"""
return ivy.dct(
self._data,
type=type,
n=n,
axis=axis,
norm=norm,
out=out,
)
def idct(
self: ivy.Array,
/,
*,
type: Literal[1, 2, 3, 4] = 2,
n: Optional[int] = None,
axis: int = -1,
norm: Optional[Literal["ortho"]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.idct. This method simply
wraps the function, and so the docstring for ivy.idct also applies to
this method with minimal changes.
Parameters
----------
self
The input signal.
type
The type of the idct. Must be 1, 2, 3 or 4.
n
The length of the transform. If n is less than the input signal length,
then x is truncated; if n is larger, then x is zero-padded.
axis
The axis to compute the IDCT along. Default is ``-1``.
norm
The type of normalization to be applied. Must be either None or "ortho".
out
optional output array, for writing the result to.
Returns
-------
ret
Array containing the transformed input.
Examples
--------
>>> x = ivy.array([8., 16., 24., 32., 40., 48., 56., 64.])
>>> x.idct(type=2, norm="ortho")
ivy.array([ 79.49862671, -70.37691498, 30.00390816, -23.58938599,
13.92713165, -10.078475 , 5.19664812, -1.95411837])
"""
return ivy.idct(
self._data,
type=type,
n=n,
axis=axis,
norm=norm,
out=out,
)
def fft(
self: ivy.Array,
dim: int,
/,
*,
norm: str = "backward",
n: Optional[Union[int, Tuple[int]]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.ifft. This method simply
wraps the function, and so the docstring for ivy.ifft also applies to
this method with minimal changes.
Parameters
----------
self
Input volume *[...,d_in,...]*,
where d_in indicates the dimension that needs FFT.
dim
The dimension along which to take the one dimensional FFT.
norm
Optional argument, "backward", "ortho" or "forward". Defaults to be
"backward".
"backward" indicates no normalization.
"ortho" indicates normalization by 1/sqrt(n).
"forward" indicates normalization by 1/n.
n
Optional argument indicating the sequence length, if given, the input
would be padded with zero or truncated to length n before performing FFT.
Should be an integer greater than 1.
out
Optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Array containing the transformed input.
Examples
--------
>>> import numpy as np
>>> a = ivy.array(np.exp(2j * np.pi * np.arange(8) / 8))
>>> a.fft(0)
ivy.array([-3.44509285e-16+1.14423775e-17j, 8.00000000e+00-8.11483250e-16j,
2.33486982e-16+1.22464680e-16j, 0.00000000e+00+1.22464680e-16j,
9.95799250e-17+2.33486982e-16j, 0.00000000e+00+7.66951701e-17j,
1.14423775e-17+1.22464680e-16j, 0.00000000e+00+1.22464680e-16j])
"""
return ivy.fft(
self._data,
dim,
norm=norm,
n=n,
out=out,
)
def ifft(
self: ivy.Array,
dim: int,
*,
norm: str = "backward",
n: Optional[Union[int, Tuple[int]]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.ifft. This method simply
wraps the function, and so the docstring for ivy.ifft also applies to
this method with minimal changes.
Parameters
----------
self
Input volume *[...,d_in,...]*,
where d_in indicates the dimension that needs IFFT.
dim
The dimension along which to take the one dimensional IFFT.
norm
Optional argument, "backward", "ortho" or "forward". Defaults to be
"backward".
"backward" indicates no normalization.
"ortho" indicates normalization by 1/sqrt(n).
"forward" indicates normalization by 1/n.
n
Optional argument indicating the sequence length, if given, the input
would be padded with zero or truncated to length n before performing IFFT.
Should be an integer greater than 1.
out
Optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
Array containing the transformed input.
Examples
--------
>>> import numpy as np
>>> a = ivy.array(np.exp(2j * np.pi * np.arange(8) / 8))
>>> a.ifft(0)
ivy.array([-4.30636606e-17+1.43029718e-18j, 0.00000000e+00+1.53080850e-17j,
1.43029718e-18+1.53080850e-17j, 0.00000000e+00+9.58689626e-18j,
1.24474906e-17+2.91858728e-17j, 0.00000000e+00+1.53080850e-17j,
2.91858728e-17+1.53080850e-17j, 1.00000000e+00-1.01435406e-16j])
"""
return ivy.ifft(
self._data,
dim,
norm=norm,
n=n,
out=out,
)
def embedding(
self: ivy.Array,
indices: ivy.Array,
/,
*,
max_norm: Optional[int] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
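"""ivy.Array instance method variant of ivy.embedding. This method
simply wraps the function, and so the docstring for ivy.embedding
also applies to this method with minimal changes.
Examples
--------
A minimal sketch, assuming a plain row lookup with no ``max_norm``
rescaling (printed formatting may vary by backend):
>>> weights = ivy.array([[1., 2.], [3., 4.], [5., 6.]])
>>> indices = ivy.array([0, 2])
>>> print(weights.embedding(indices))
ivy.array([[1., 2.],
[5., 6.]])
"""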
return ivy.embedding(self._data, indices, max_norm=max_norm, out=out)
def dft(
self,
/,
*,
axis: int = 1,
inverse: bool = False,
onesided: bool = False,
dft_length: Optional[Union[int, Tuple[int]]] = None,
norm: str = "backward",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the discrete Fourier transform of input.
Parameters
----------
self
Input volume *[...,d_in,...]*,
where d_in indicates the dimension that needs FFT.
axis
The axis on which to perform the DFT. By default this
value is set to 1, which corresponds to the first dimension
after the batch index.
inverse
Whether to perform the inverse discrete fourier transform.
By default this value is set to False.
onesided
If onesided is True, only values for w in [0, 1, 2, …, floor(n_fft/2) + 1]
are returned because the real-to-complex Fourier transform satisfies the
conjugate symmetry, i.e., X[m, w] = X[m, n_fft - w]*. Note that if the
input or window tensors are complex, then onesided output is not possible.
Enabling onesided with real inputs performs a Real-valued fast Fourier
transform (RFFT). When invoked with real or complex valued input, the
default value is False. Values can be True or False.
dft_length
The length of the signal.If greater than the axis dimension,
the signal will be zero-padded up to dft_length. If less than
the axis dimension, only the first dft_length values will be
used as the signal. It’s an optional value.
norm
Optional argument, "backward", "ortho" or "forward". Defaults to be
"backward".
"backward" indicates no normalization.
"ortho" indicates normalization by 1/sqrt(n).
"forward" indicates normalization by 1/n.
out
Optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
The Fourier Transform of the input vector.If onesided is False,
the following shape is expected: [batch_idx][signal_dim1][signal_dim2]
…[signal_dimN][2]. If axis=0 and onesided is True, the following shape
is expected: [batch_idx][floor(signal_dim1/2)+1][signal_dim2]
…[signal_dimN][2]. If axis=1 and onesided is True, the following
shape is expected: [batch_idx][signal_dim1] [floor(signal_dim2/2)+1]
…[signal_dimN][2]. If axis=N-1 and onesided is True, the following
shape is expected: [batch_idx][signal_dim1][signal_dim2]…
[floor(signal_dimN/2)+1][2]. The signal_dim at the specified axis
is equal to the dft_length.
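Examples
--------
A usage sketch only; the exact output layout depends on ``onesided``
and on how the backend represents complex values:
>>> x = ivy.array([[1., 2., 3., 4.]])
>>> y = x.dft(axis=1)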
"""
return ivy.dft(
self._data,
axis=axis,
inverse=inverse,
onesided=onesided,
dft_length=dft_length,
norm=norm,
out=out,
)
def interpolate(
self,
size: Union[Sequence[int], int],
/,
*,
mode: Union[
Literal[
"linear",
"bilinear",
"trilinear",
"nearest",
"area",
"nearest_exact",
"tf_area",
"bicubic",
]
] = "linear",
scale_factor: Optional[Union[Sequence[int], int]] = None,
recompute_scale_factor: Optional[bool] = None,
align_corners: Optional[bool] = None,
antialias: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Down/up samples the input to the given size. The algorithm used for
interpolation is determined by mode.
Parameters
----------
self
Input array, Must have the shape
[batch x channels x [optional depth] x [optional height] x width].
size
Output size.
mode
Interpolation mode. Can be one of the following:
- linear
- bilinear
- trilinear
- nearest
- area
- nearest_exact
- tf_area
- bicubic
scale_factor
Multiplier for spatial size that defines the output size
(overwriting `size`).
align_corners
If True, the corner pixels of the input and output tensors are aligned,
and thus preserving the values at the corner pixels. If False, the corner
pixels are not aligned, and the interpolation uses edge value padding for
out-of-boundary values.
only has an effect when mode is 'linear', 'bilinear',
'bicubic' or 'trilinear'. Default: ``None``
antialias
If True, antialiasing is applied when downsampling an image.
Supported modes: 'bilinear', 'bicubic'.
out
Optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
resized array
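Examples
--------
A minimal sketch using nearest-neighbour upsampling; the values assume
the conventional floor-based index mapping:
>>> x = ivy.array([[[1., 2.]]])
>>> print(x.interpolate(4, mode="nearest"))
ivy.array([[[1., 1., 2., 2.]]])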
"""
return ivy.interpolate(
self._data,
size,
mode=mode,
scale_factor=scale_factor,
recompute_scale_factor=recompute_scale_factor,
align_corners=align_corners,
antialias=antialias,
out=out,
)
def adaptive_avg_pool1d(
self: ivy.Array,
output_size: int,
) -> ivy.Array:
"""Apply a 1D adaptive average pooling over an input signal composed of
several input planes.
Parameters
----------
self
Input array. Must have shape (N, C, L_in) or (C, L_in) where N is
the batch dimension, C is the feature dimension, and L_in is the spatial
dimension.
output_size
Spatial output size.
Returns
-------
The result of the pooling operation. Will have shape (N, C, L_out) or
(C, L_out), where L_out = `output_size`
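Examples
--------
A minimal sketch; when ``output_size`` evenly divides the input length,
each output element is the mean of one contiguous block:
>>> x = ivy.array([[[1., 2., 3., 4.], [5., 6., 7., 8.]]])
>>> print(x.adaptive_avg_pool1d(2))
ivy.array([[[1.5, 3.5],
[5.5, 7.5]]])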
"""
return ivy.adaptive_avg_pool1d(
self._data,
output_size,
)
def adaptive_avg_pool2d(
self: ivy.Array,
output_size: Union[Sequence[int], int],
/,
*,
data_format: str = "NHWC",
) -> ivy.Array:
"""Apply a 2D adaptive average pooling over an input signal composed of
several input planes.
Parameters
----------
self
A 3D or 4D input array. Should have a floating-point data type.
output_size
Spatial output size.
data_format
"NHWC" or "NCHW". Defaults to "NHWC".
Returns
-------
The result of the pooling operation. Will have shape (N, C, S_0, S_1) or
(C, S_0, S_1), where S = `output_size`
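Examples
--------
A minimal sketch with an explicit "NCHW" layout; an output size of 1
reduces to a global average over the spatial dimensions:
>>> x = ivy.array([[[[1., 2.], [3., 4.]]]])
>>> print(x.adaptive_avg_pool2d(1, data_format="NCHW"))
ivy.array([[[[2.5]]]])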
"""
return ivy.adaptive_avg_pool2d(
self._data,
output_size,
data_format=data_format,
)
def adaptive_max_pool2d(
self: ivy.Array,
output_size: Union[Sequence[int], int],
) -> ivy.Array:
"""Apply a 2D adaptive maximum pooling over an input signal composed of
several input planes.
Parameters
----------
self
Input array. Must have shape (N, C, H_in, W_in) or (C, H_in, W_in) where N
is the batch dimension, C is the feature dimension, and H_in and W_in are
the 2 spatial dimensions.
output_size
Spatial output size.
Returns
-------
The result of the pooling operation. Will have shape (N, C, S_0, S_1) or
(C, S_0, S_1), where S = `output_size`
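Examples
--------
A minimal sketch; an output size of 1 reduces to a global maximum over
the spatial dimensions:
>>> x = ivy.array([[[[1., 2.], [3., 4.]]]])
>>> print(x.adaptive_max_pool2d(1))
ivy.array([[[[4.]]]])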
"""
return ivy.adaptive_max_pool2d(
self._data,
output_size,
)
def adaptive_max_pool3d(
self: ivy.Array,
output_size: Union[Sequence[int], int],
) -> ivy.Array:
return ivy.adaptive_max_pool3d(
self._data,
output_size,
)
def reduce_window(
self: ivy.Array,
init_value: Union[int, float],
computation: Callable,
window_dimensions: Union[int, Sequence[int]],
/,
*,
window_strides: Union[int, Sequence[int]] = 1,
padding: Union[str, int, Sequence[Tuple[int, int]]] = "VALID",
base_dilation: Union[int, Sequence[int]] = 1,
window_dilation: Union[int, Sequence[int]] = 1,
) -> ivy.Array:
"""Apply a reduction function to all elements in each window of an
array.
Parameters
----------
self
An array representing the base area on which the window is going to slide
over.
init_value
The starting value for the reduction.
computation
The reduction function to apply to elements in each window.
window_dimensions
A sequence containing the window dimensions.
window_strides
A sequence containing the window strides.
padding
Either the string ‘SAME’ (padding with zeros evenly), the string ‘VALID’ (no
padding), or a sequence of n (low, high) integer pairs that give the padding
to apply before and after each spatial dimension.
base_dilation
A sequence containing the base dilation values.
window_dilation
A sequence containing the window dilation values.
Returns
-------
ret
The result of the pooling-like operation.
Examples
--------
>>> x = ivy.array([[1, 2, 3, 4],
...                [5, 6, 7, 8],
...                [9, 10, 11, 12]])
>>> x.reduce_window(0, ivy.sum, (2, 2))
ivy.array([[14, 18, 22],
[30, 34, 38]])
"""
return ivy.reduce_window(
self._data,
init_value,
computation,
window_dimensions,
window_strides=window_strides,
padding=padding,
base_dilation=base_dilation,
window_dilation=window_dilation,
)
def fft2(
self: ivy.Array,
*,
s: Optional[Sequence[int]] = None,
dim: Sequence[int] = (-2, -1),
norm: str = "backward",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the 2-dimensional discrete Fourier Transform.
Parameters
----------
self
Input volume *[...,d_in,...]*,
where d_in indicates the dimension that needs FFT2.
s
sequence of ints, optional
Shape (length of each transformed axis) of the output (s[0] refers
to axis 0, s[1] to axis 1, etc.). This corresponds to n for fft(x, n).
Along each axis, if the given shape is smaller than that of the input,
the input is cropped. If it is larger, the input is padded with zeros.
If s is not given, the shape of the input along the axes specified by
axes is used.
dim
Axes over which to compute the FFT2. If not given, the last two axes are
used. A repeated index in axes means the transform over that axis is
performed multiple times. A one-element sequence means that a
one-dimensional FFT is performed.
norm
Optional argument, "backward", "ortho" or "forward". Defaults to be
"backward".
"backward" indicates no normalization.
"ortho" indicates normalization by 1/sqrt(n).
"forward" indicates normalization by 1/n.
out
Optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The result of the FFT2 operation.
Examples
--------
>>> a = ivy.array([[0, 0, 0, 0, 0],
...                [1, 1, 1, 1, 1],
...                [2, 2, 2, 2, 2],
...                [3, 3, 3, 3, 3],
...                [4, 4, 4, 4, 4]])
>>> a.fft2()
ivy.array([[ 50.  +0.j        ,   0.  +0.j        ,   0.  +0.j        , # may vary
0. +0.j , 0. +0.j ],
[-12.5+17.20477401j, 0. +0.j , 0. +0.j ,
0. +0.j , 0. +0.j ],
[-12.5 +4.0614962j , 0. +0.j , 0. +0.j ,
0. +0.j , 0. +0.j ],
[-12.5 -4.0614962j , 0. +0.j , 0. +0.j ,
0. +0.j , 0. +0.j ],
[-12.5-17.20477401j, 0. +0.j , 0. +0.j ,
0. +0.j , 0. +0.j ]])
"""
return ivy.fft2(self._data, s=s, dim=dim, norm=norm, out=out)
def ifftn(
self: ivy.Array,
s: Optional[Union[int, Tuple[int, ...]]] = None,
axes: Optional[Union[int, Tuple[int, ...]]] = None,
*,
norm: str = "backward",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the N-dimensional inverse discrete Fourier Transform.
Parameters
----------
self
Input array of complex numbers.
s
sequence of ints, optional
Shape (length of transformed axis) of the output (`s[0]` refers to axis 0,
`s[1]` to axis 1, etc.). If given shape is smaller than that of the input,
the input is cropped. If larger, input is padded with zeros. If `s` is not
given, shape of input along axes specified by axes is used.
axes
axes over which to compute the IFFT. If not given, last `len(s)` axes are
used, or all axes if `s` is also not specified. Repeated indices in axes
means inverse transform over that axis is performed multiple times.
norm
Optional argument, "backward", "ortho" or "forward". Defaults to be
"backward".
"backward" indicates no normalization.
"ortho" indicates normalization by 1/sqrt(n).
"forward" indicates normalization by 1/n.
out
Optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
The truncated or zero-padded input, transformed along the axes indicated
by axes, or by a combination of s or x, as explained in the parameters
section above.
Examples
--------
>>> x = ivy.array([[0.24730653+0.90832391j, 0.49495562+0.9039565j,
... 0.98193269+0.49560517j],
... [0.93280757+0.48075343j, 0.28526384+0.3351205j,
... 0.2343787 +0.83528011j],
... [0.18791352+0.30690572j, 0.82115787+0.96195183j,
... 0.44719226+0.72654048j]])
>>> y = x.ifftn()
>>> print(y)
ivy.array([[ 0.51476765+0.66160417j, -0.04319742-0.05411636j,
-0.015561 -0.04216015j],
[ 0.06310689+0.05347854j, -0.13392983+0.16052352j,
-0.08371392+0.17252843j],
[-0.0031429 +0.05421245j, -0.10446617-0.17747098j,
0.05344324+0.07972424j]])
>>> x = ivy.array([[0.24730653+0.90832391j, 0.49495562+0.9039565j,
... 0.98193269+0.49560517j],
... [0.93280757+0.48075343j, 0.28526384+0.3351205j,
... 0.2343787 +0.83528011j],
... [0.18791352+0.30690572j, 0.82115787+0.96195183j,
... 0.44719226+0.72654048j]])
>>> y = x.ifftn(s=[2, 1], axes=[0, 1], norm='ortho')
>>> print(y)
ivy.array([[ 0.8344667 +0.98222595j],
[-0.48472244+0.30233797j]])
"""
return ivy.ifftn(self._data, s=s, axes=axes, norm=norm, out=out)
def rfft(
self: ivy.Array,
/,
*,
n: Optional[int] = None,
axis: int = -1,
norm: Literal["backward", "ortho", "forward"] = "backward",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.rfft. This method simply
wraps the function, and so the docstring for ivy.rfft also applies to
this method with minimal changes.
Parameters
----------
self
input array. Must have a real-valued floating-point data type.
n
length of the transformed axis of the input. If
- n is greater than the length of the input array, the input array
is zero-padded to length n.
- n is less than the length of the input array, the input array is
trimmed to length n.
- n is not provided, the length of the transformed axis of the
output must equal the length of the input along the axis specified
by axis. Default is ``None``.
axis
axis (dimension) over which to compute the Fourier transform.
If not set, the last axis (dimension) is used. Default is ``-1``.
norm
normalization mode. Should be one of the following modes:
- 'backward': no normalization.
- 'ortho': normalize by 1/sqrt(n) (i.e., make the FFT orthonormal).
- 'forward': normalize by 1/n.
Default is ``backward``.
out
Optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
Returns
-------
ret
an array transformed along the axis (dimension) indicated by axis.
The returned array must have a complex-valued floating-point
data type determined by Type Promotion Rules.
Examples
--------
>>> x = ivy.array([0,1,2])
>>> y = x.rfft()
>>> print(y)
ivy.array([ 3. +0.j , -1.5+0.8660254j])
"""
return ivy.rfft(self, n=n, axis=axis, norm=norm, out=out)
def rfftn(
self: ivy.Array,
s: Optional[Sequence[int]] = None,
axes: Optional[Sequence[int]] = None,
*,
norm: str = "backward",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the n-dimensional discrete Fourier Transform.
Parameters
----------
self
Input array.
s
Shape (length of each transformed axis) of the output.
axes
Axes over which to compute the RFFT. If not given, the last len(s) axes are
used.
norm
Normalization mode: "backward", "ortho", or "forward".
out
Optional output array for writing the result.
Returns
-------
ret
The result of the RFFT operation.
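Examples
--------
A minimal sketch (the printed complex formatting may vary by backend):
>>> x = ivy.array([[1., 0.], [0., 1.]])
>>> print(x.rfftn())
ivy.array([[2.+0.j, 0.+0.j],
[0.+0.j, 2.+0.j]])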
"""
return ivy.rfftn(self._data, s=s, axes=axes, norm=norm, out=out)
def stft(
self: ivy.Array,
frame_length: int,
frame_step: int,
/,
*,
fft_length: Optional[int] = None,
window_fn: Optional[Callable] = None,
pad_end: Optional[bool] = False,
name: Optional[str] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the Short-time Fourier Transform of signals.
Parameters
----------
self
Input Arrays.
frame_length
An integer scalar Tensor. The window length in samples.
frame_step
An integer scalar Tensor. The number of samples to step.
fft_length
An integer scalar Tensor. The size of the FFT to apply.
If not provided, uses the smallest power of 2 enclosing frame_length.
window_fn
A callable that takes a window length and a dtype keyword
argument and returns a [window_length] Tensor of samples in the
provided datatype. If set to None, no windowing is used.
pad_end
Whether to pad the end of signals with zeros when the provided frame length
and step produces a frame that lies partially past its end.
name
An optional name for the operation.
out
Optional output array for writing the result.
Returns
-------
ret
A [..., frames, fft_unique_bins] Tensor of
complex64/complex128 STFT values where fft_unique_bins is
fft_length // 2 + 1 (the unique components of the FFT).
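Examples
--------
A usage sketch only; the STFT values depend on the window function and
backend. With no padding, 8 samples with frame length 4 and step 2
yield 3 frames of ``4 // 2 + 1 = 3`` frequency bins:
>>> signal = ivy.array([1., 2., 3., 4., 5., 6., 7., 8.])
>>> spectrum = signal.stft(4, 2)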
"""
return ivy.stft(
self._data,
frame_length,
frame_step,
fft_length=fft_length,
window_fn=window_fn,
pad_end=pad_end,
name=name,
out=out,
)
def sliding_window(
self: ivy.Array,
window_size: Union[int, Tuple[int, int], Tuple[int, int, int]],
/,
*,
stride: Union[int, Tuple[int, int]] = 1,
dilation: Union[int, Tuple[int, int]] = 1,
padding: Union[str, int, Sequence[Tuple[int, int]]] = "VALID",
) -> ivy.Array:
"""Slide a window of specified dimension over all elements of an array.
Parameters
----------
self
An array representing the base area on which the window is going to slide
over.
window_size
Size of the sliding window for each dimension of the input.
stride
The stride of the sliding window for each dimension of input
padding
Either the string ‘SAME’ (padding with zeros evenly), the string ‘VALID’
(no padding), or a sequence of n (low, high) integer pairs that give the
padding to apply before and after each spatial dimension.
dilation
The stride between elements within a sliding window, must be > 0.
Returns
-------
ret
The result of the sliding window operation.
Examples
--------
>>> x = ivy.array([[1, 2, 3, 4],
...                [5, 6, 7, 8],
...                [9, 10, 11, 12]])
>>> x.sliding_window((2, 2))
ivy.array([[[ 1, 2, 5, 6],
[ 2, 3, 6, 7],
[ 3, 4, 7, 8]],
[[ 5, 6, 9, 10],
[ 6, 7, 10, 11],
[ 7, 8, 11, 12]]])
"""
return ivy.sliding_window(
self._data,
window_size,
stride=stride,
dilation=dilation,
padding=padding,
)
def max_unpool1d(
self: ivy.Array,
indices: ivy.Array,
kernel_size: Union[Tuple[int], int],
/,
*,
strides: Optional[Union[int, Tuple[int]]] = None,
padding: Union[int, Tuple[int]] = 0,
data_format: Optional[str] = "NCW",
) -> ivy.Array:
"""Compute a 1-D max unpooling given the 1-D pooled input x and its
indices.
Parameters
----------
self
Pooled input image *[batch_size, w, d_in]*.
indices
Indices obtained from the corresponding max pooling operation.
kernel_size
Size of the kernel i.e., the sliding window for each
dimension of input. *[w]*.
strides
The stride of the sliding window for each dimension of input.
padding
SAME" or "VALID" indicating the algorithm, or list
indicating the per-dimension paddings.
data_format
NWC" or "NCW". Defaults to "NWC".
Returns
-------
ret
The result of the unpooling operation.
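Examples
--------
A minimal sketch, assuming ``indices`` came from a matching max-pool
with kernel size and stride 2 over an input of length 4:
>>> pooled = ivy.array([[[2., 4.]]])
>>> indices = ivy.array([[[1, 3]]])
>>> print(pooled.max_unpool1d(indices, 2, strides=2))
ivy.array([[[0., 2., 0., 4.]]])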
"""
return ivy.max_unpool1d(
self._data,
indices,
kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
)
| ivy/ivy/data_classes/array/experimental/layers.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/layers.py",
"repo_id": "ivy",
"token_count": 22888
} | 9 |
# global
import abc
from typing import Optional, Union
# local
import ivy
class _ArrayWithLosses(abc.ABC):
def cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = -1,
epsilon: float = 1e-7,
reduction: str = "mean",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.cross_entropy. This method
simply wraps the function, and so the docstring for ivy.cross_entropy
also applies to this method with minimal changes.
Parameters
----------
self
input array containing true labels.
pred
input array containing the predicted labels.
axis
the axis along which to compute the cross-entropy. If axis is ``-1``,
the cross-entropy will be computed along the last dimension.
Default: ``-1``.
epsilon
a float in [0.0, 1.0] specifying the amount of smoothing when calculating
the loss. If epsilon is ``0``, no smoothing will be applied.
Default: ``1e-7``.
reduction
``'none'``: No reduction will be applied to the output.
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed. Default: ``'mean'``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The cross-entropy loss between the given distributions.
Examples
--------
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> z = x.cross_entropy(y)
>>> print(z)
ivy.array(0.34657359)
"""
return ivy.cross_entropy(
self._data, pred, axis=axis, epsilon=epsilon, reduction=reduction, out=out
)
def binary_cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
/,
*,
from_logits: bool = False,
epsilon: float = 0.0,
reduction: str = "mean",
pos_weight: Optional[Union[ivy.Array, ivy.NativeArray]] = None,
axis: Optional[int] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.binary_cross_entropy. This
method simply wraps the function, and so the docstring for
ivy.binary_cross_entropy also applies to this method with minimal
changes.
Parameters
----------
self
input array containing true labels.
pred
input array containing Predicted labels.
from_logits
Whether `pred` is expected to be a logits tensor. By
default, we assume that `pred` encodes a probability distribution.
epsilon
a float in [0.0, 1.0] specifying the amount of smoothing when calculating
the loss. If epsilon is ``0``, no smoothing will be applied. Default: ``0``.
reduction
``'none'``: No reduction will be applied to the output.
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed. Default: ``'mean'``.
pos_weight
a weight for positive examples. Must be an array with length equal
to the number of classes.
axis
Axis along which to compute crossentropy.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The binary cross entropy between the given distributions.
Examples
--------
>>> x = ivy.array([1 , 1, 0])
>>> y = ivy.array([0.7, 0.8, 0.2])
>>> z = x.binary_cross_entropy(y)
>>> print(z)
ivy.array(0.26765382)
"""
return ivy.binary_cross_entropy(
self._data,
pred,
from_logits=from_logits,
epsilon=epsilon,
reduction=reduction,
pos_weight=pos_weight,
axis=axis,
out=out,
)
def sparse_cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = -1,
epsilon: float = 1e-7,
reduction: str = "mean",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.sparse_cross_entropy. This
method simply wraps the function, and so the docstring for
ivy.sparse_cross_entropy also applies to this method with minimal
changes.
Parameters
----------
self
input array containing the true labels as logits.
pred
input array containing the predicted labels as logits.
axis
the axis along which to compute the cross-entropy. If axis is ``-1``, the
cross-entropy will be computed along the last dimension. Default: ``-1``.
epsilon
a float in [0.0, 1.0] specifying the amount of smoothing when calculating
the loss. If epsilon is ``0``, no smoothing will be applied.
Default: ``1e-7``.
reduction
``'none'``: No reduction will be applied to the output.
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed. Default: ``'mean'``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The sparse cross-entropy loss between the given distributions.
Examples
--------
>>> x = ivy.array([1 , 1, 0])
>>> y = ivy.array([0.7, 0.8, 0.2])
>>> z = x.sparse_cross_entropy(y)
>>> print(z)
ivy.array([0.07438118, 0.07438118, 0.11889165])
"""
return ivy.sparse_cross_entropy(
self._data, pred, axis=axis, epsilon=epsilon, reduction=reduction, out=out
)
| ivy/ivy/data_classes/array/losses.py/0 | {
"file_path": "ivy/ivy/data_classes/array/losses.py",
"repo_id": "ivy",
"token_count": 2787
} | 10 |
# global
from typing import Optional, Union, List, Dict, Tuple, Callable
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
# ToDo: implement all methods here as public instance methods
# noinspection PyMissingConstructor
class _ContainerWithDataTypes(ContainerBase):
@staticmethod
def _static_astype(
x: ivy.Container,
dtype: Union[ivy.Dtype, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
copy: Union[bool, ivy.Container] = True,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""Copy an array to a specified data type irrespective of :ref:`type-
promotion` rules.
.. note::
Casting floating-point ``NaN`` and ``infinity`` values to integral data types
is not specified and is implementation-dependent.
.. note::
When casting a boolean input array to a numeric data type, a value of ``True``
must cast to a numeric value equal to ``1``, and a value of ``False`` must cast
to a numeric value equal to ``0``.
When casting a numeric input array to ``bool``, a value of ``0`` must cast to
``False``, and a non-zero value must cast to ``True``.
Parameters
----------
x
array to cast.
dtype
desired data type.
copy
specifies whether to copy an array when the specified ``dtype`` matches
the data type of the input array ``x``. If ``True``, a newly allocated
array must always be returned. If ``False`` and the specified ``dtype``
matches the data type of the input array, the input array must be returned;
otherwise, a newly allocated array must be returned. Default: ``True``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array having the specified data type. The returned array must have
the same shape as ``x``.
Examples
--------
>>> c = ivy.Container(a=ivy.array([False,True,True]),
... b=ivy.array([3.14, 2.718, 1.618]))
>>> ivy.Container.static_astype(c, ivy.int32)
{
a: ivy.array([0, 1, 1]),
b: ivy.array([3, 2, 1])
}
"""
return ContainerBase.cont_multi_map_in_function(
"astype",
x,
dtype,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
copy=copy,
out=out,
)
def astype(
self: ivy.Container,
dtype: Union[ivy.Dtype, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
copy: Union[bool, ivy.Container] = True,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""Copy an array to a specified data type irrespective of :ref:`type-
promotion` rules.
.. note::
Casting floating-point ``NaN`` and ``infinity`` values to integral data types
is not specified and is implementation-dependent.
.. note::
When casting a boolean input array to a numeric data type, a value of ``True``
must cast to a numeric value equal to ``1``, and a value of ``False`` must cast
to a numeric value equal to ``0``.
When casting a numeric input array to ``bool``, a value of ``0`` must cast to
``False``, and a non-zero value must cast to ``True``.
Parameters
----------
self
array to cast.
dtype
desired data type.
copy
specifies whether to copy an array when the specified ``dtype`` matches
the data type of the input array ``x``. If ``True``, a newly allocated
array must always be returned. If ``False`` and the specified ``dtype``
matches the data type of the input array, the input array must be returned;
otherwise, a newly allocated array must be returned. Default: ``True``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array having the specified data type. The returned array must have
the same shape as ``x``.
Examples
--------
Using :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([False,True,True]),
... b=ivy.array([3.14, 2.718, 1.618]))
>>> print(x.astype(ivy.int32))
{
a: ivy.array([0, 1, 1]),
b: ivy.array([3, 2, 1])
}
"""
return self._static_astype(
self,
dtype,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
copy=copy,
out=out,
)
@staticmethod
def _static_broadcast_arrays(
*arrays: Union[ivy.Container, ivy.Array, ivy.NativeArray],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.broadcast_arrays`.
This method simply wraps the function, and so the docstring for
`ivy.broadcast_arrays` also applies to this method with minimal
changes.
Parameters
----------
arrays
an arbitrary number of to-be broadcasted arrays. The arrays must
be mutually broadcast-compatible.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
A list of containers containing broadcasted arrays
Examples
--------
With :class:`ivy.Container` inputs:
>>> x1 = ivy.Container(a=ivy.array([1, 2]), b=ivy.array([3, 4]))
>>> x2 = ivy.Container(a=ivy.array([-1.2, 0.4]), b=ivy.array([0, 1]))
>>> y = ivy.Container.static_broadcast_arrays(x1, x2)
>>> print(y)
[{
a: ivy.array([1, 2]),
b: ivy.array([3, 4])
}, {
a: ivy.array([-1.2, 0.4]),
b: ivy.array([0, 1])
}]
With mixed :class:`ivy.Container` and :class:`ivy.Array` inputs:
>>> x1 = ivy.Container(a=ivy.array([4, 5]), b=ivy.array([2, -1]))
>>> x2 = ivy.array([0.2, 3.])
>>> y = ivy.Container.static_broadcast_arrays(x1, x2)
>>> print(y)
[{
a: ivy.array([4, 5]),
b: ivy.array([2, -1])
}, {
a: ivy.array([0.2, 3.]),
b: ivy.array([0.2, 3.])
}]
"""
return ContainerBase.cont_multi_map_in_function(
"broadcast_arrays",
*arrays,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def broadcast_arrays(
self: ivy.Container,
*arrays: Union[ivy.Container, ivy.Array, ivy.NativeArray],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.broadcast_arrays`.
This method simply wraps the function, and so the docstring for
`ivy.broadcast_arrays` also applies to this method with minimal
changes.
Parameters
----------
self
A container to be broadcasted against the other input arrays.
arrays
an arbitrary number of containers holding to-be broadcasted arrays.
The arrays must be mutually broadcast-compatible.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x1 = ivy.Container(a=ivy.array([1, 2]), b=ivy.array([3, 4]))
>>> x2 = ivy.Container(a=ivy.array([-1.2, 0.4]), b=ivy.array([0, 1]))
>>> y = x1.broadcast_arrays(x2)
>>> print(y)
[{
a: ivy.array([1, 2]),
b: ivy.array([3, 4])
}, {
a: ivy.array([-1.2, 0.4]),
b: ivy.array([0, 1])
}]
With mixed :class:`ivy.Container` and :class:`ivy.Array` inputs:
>>> x1 = ivy.Container(a=ivy.array([4, 5]), b=ivy.array([2, -1]))
>>> x2 = ivy.zeros(2)
>>> y = x1.broadcast_arrays(x2)
>>> print(y)
[{
a: ivy.array([4, 5]),
b: ivy.array([2, -1])
}, {
a: ivy.array([0., 0.]),
b: ivy.array([0., 0.])
}]
"""
return self._static_broadcast_arrays(
self,
*arrays,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_broadcast_to(
x: ivy.Container,
/,
shape: Union[Tuple[int, ...], ivy.Container],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.broadcast_to`. This
method simply wraps the function, and so the docstring for
`ivy.broadcast_to` also applies to this method with minimal changes.
Parameters
----------
x
input array to be broadcasted.
shape
desired shape to be broadcasted to.
out
Optional array to store the broadcasted array.
Returns
-------
ret
Returns the broadcasted array of shape 'shape'
Examples
--------
With :class:`ivy.Container` static method:
>>> x = ivy.Container(a=ivy.array([1]),
... b=ivy.array([2]))
>>> y = ivy.Container.static_broadcast_to(x,(3, 1))
>>> print(y)
{
a: ivy.array([[1],
[1],
[1]]),
b: ivy.array([[2],
[2],
[2]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"broadcast_to",
x,
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def broadcast_to(
self: ivy.Container,
/,
shape: Union[Tuple[int, ...], ivy.Container],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.broadcast_to`. This
method simply wraps the function, and so the docstring for
`ivy.broadcast_to` also applies to this method with minimal changes.
Parameters
----------
self
input array to be broadcasted.
shape
desired shape to be broadcasted to.
out
Optional array to store the broadcasted array.
Returns
-------
ret
Returns the broadcasted array of shape 'shape'
Examples
--------
With :class:`ivy.Container` instance method:
>>> x = ivy.Container(a=ivy.array([0, 0.5]),
... b=ivy.array([4, 5]))
>>> y = x.broadcast_to((3,2))
>>> print(y)
{
a: ivy.array([[0., 0.5],
[0., 0.5],
[0., 0.5]]),
b: ivy.array([[4, 5],
[4, 5],
[4, 5]])
}
"""
return self._static_broadcast_to(
self,
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_can_cast(
from_: ivy.Container,
to: Union[ivy.Dtype, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.can_cast`. This method
simply wraps the function, and so the docstring for `ivy.can_cast` also
applies to this method with minimal changes.
Parameters
----------
from_
input container from which to cast.
to
desired data type.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
``True`` if the cast can occur according to :ref:`type-promotion` rules;
otherwise, ``False``.
Examples
--------
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]),
... b=ivy.array([3, 4, 5]))
>>> print(x.a.dtype, x.b.dtype)
float32 int32
>>> print(ivy.Container.static_can_cast(x, 'int64'))
{
a: False,
b: True
}
"""
return ContainerBase.cont_multi_map_in_function(
"can_cast",
from_,
to,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def can_cast(
self: ivy.Container,
to: Union[ivy.Dtype, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.can_cast`. This
method simply wraps the function, and so the docstring for
`ivy.can_cast` also applies to this method with minimal changes.
Parameters
----------
self
input container from which to cast.
to
desired data type.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
``True`` if the cast can occur according to :ref:`type-promotion` rules;
otherwise, ``False``.
Examples
--------
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]),
... b=ivy.array([3, 4, 5]))
>>> print(x.a.dtype, x.b.dtype)
float32 int32
>>> print(x.can_cast('int64'))
{
a: False,
b: True
}
"""
return self._static_can_cast(
self, to, key_chains, to_apply, prune_unapplied, map_sequences
)
@staticmethod
def _static_dtype(
x: ivy.Container,
*,
as_native: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"dtype",
x,
as_native=as_native,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def dtype(
self: ivy.Container,
*,
as_native: Union[bool, ivy.Container] = False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([2, 3, 4]))
>>> y = x.dtype()
>>> print(y)
{
a: int32,
b: int32
}
"""
return self._static_dtype(
self,
as_native=as_native,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_default_float_dtype(
*,
input: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
float_dtype: Optional[
Union[ivy.FloatDtype, ivy.NativeDtype, ivy.Container]
] = None,
as_native: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
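"""`ivy.Container` static method variant of `ivy.default_float_dtype`.
This method simply wraps the function, and so the docstring for
`ivy.default_float_dtype` also applies to this method with minimal
changes.
"""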
return ContainerBase.cont_multi_map_in_function(
"default_float_dtype",
input=input,
float_dtype=float_dtype,
as_native=as_native,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_default_complex_dtype(
*,
input: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
complex_dtype: Optional[
Union[ivy.FloatDtype, ivy.NativeDtype, ivy.Container]
] = None,
as_native: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
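"""`ivy.Container` static method variant of `ivy.default_complex_dtype`.
This method simply wraps the function, and so the docstring for
`ivy.default_complex_dtype` also applies to this method with minimal
changes.
"""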
return ContainerBase.cont_multi_map_in_function(
"default_complex_dtype",
input=input,
complex_dtype=complex_dtype,
as_native=as_native,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_function_supported_dtypes(
fn: Union[Callable, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
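"""`ivy.Container` static method variant of
`ivy.function_supported_dtypes`. This method simply wraps the function,
and so the docstring for `ivy.function_supported_dtypes` also applies
to this method with minimal changes.
"""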
return ContainerBase.cont_multi_map_in_function(
"function_supported_dtypes",
fn,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_function_unsupported_dtypes(
fn: Union[Callable, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
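"""`ivy.Container` static method variant of
`ivy.function_unsupported_dtypes`. This method simply wraps the
function, and so the docstring for `ivy.function_unsupported_dtypes`
also applies to this method with minimal changes.
"""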
return ContainerBase.cont_multi_map_in_function(
"function_unsupported_dtypes",
fn,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_finfo(
type: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.finfo`.
Parameters
----------
type
input container with leaves to inquire information about.
Returns
-------
ret
container of the same structure as `type`, with each element
as a finfo object for the corresponding dtype of
leaf in `type`.
Examples
--------
>>> c = ivy.Container(x=ivy.array([-9.5,1.8,-8.9], dtype=ivy.float16),
... y=ivy.array([7.6,8.1,1.6], dtype=ivy.float64))
>>> y = ivy.Container.static_finfo(c)
>>> print(y)
{
x: finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04,\
dtype=float16),
y: finfo(resolution=1e-15, min=-1.7976931348623157e+308, \
max=1.7976931348623157e+308, dtype=float64)
}
"""
return ContainerBase.cont_multi_map_in_function(
"finfo",
type,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def finfo(
self: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.finfo`.
Parameters
----------
self
input container with leaves to inquire information about.
Returns
-------
ret
container of the same structure as `self`, with each element
as a finfo object for the corresponding dtype of
leaf in `self`.
Examples
--------
>>> c = ivy.Container(x=ivy.array([-9.5,1.8,-8.9], dtype=ivy.float16),
... y=ivy.array([7.6,8.1,1.6], dtype=ivy.float64))
>>> print(c.finfo())
{
x: finfo(resolution=0.001, min=-6.55040e+04, max=6.55040e+04,\
dtype=float16),
y: finfo(resolution=1e-15, min=-1.7976931348623157e+308, \
max=1.7976931348623157e+308, dtype=float64)
}
"""
return self._static_finfo(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_iinfo(
type: ivy.Container,
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.iinfo`. This method
simply wraps the function, and so the docstring for `ivy.iinfo` also
applies to this method with minimal changes.
Parameters
----------
type
input container with leaves to inquire information about.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
Boolean indicating whether to apply the
method to the key-chains. Default is ``True``.
prune_unapplied
Boolean indicating whether to prune the
key-chains that were not applied. Default is ``False``.
map_sequences
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret
container of the same structure as `type`, with each element
as an iinfo object for the corresponding dtype of
leaf in `type`.
Examples
--------
>>> c = ivy.Container(x=ivy.array([12,-1800,1084], dtype=ivy.int16),
... y=ivy.array([-40000,99,1], dtype=ivy.int32))
>>> y = ivy.Container.static_iinfo(c)
>>> print(y)
{
x: iinfo(min=-32768, max=32767, dtype=int16),
y: iinfo(min=-2147483648, max=2147483647, dtype=int32)
}
"""
return ContainerBase.cont_multi_map_in_function(
"iinfo",
type,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def iinfo(
self: ivy.Container,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.iinfo`. This method
simply wraps the function, and so the docstring for `ivy.iinfo` also
applies to this method with minimal changes.
Parameters
----------
self
input container with leaves to inquire information about.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
Boolean indicating whether to apply the
method to the key-chains. Default is ``True``.
prune_unapplied
Boolean indicating whether to prune the
key-chains that were not applied. Default is ``False``.
map_sequences
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret
container of the same structure as `self`, with each element
as an iinfo object for the corresponding dtype of
leaf in `self`.
Examples
--------
>>> c = ivy.Container(x=ivy.array([-9,1800,89], dtype=ivy.int16),
... y=ivy.array([76,-81,16], dtype=ivy.int32))
>>> c.iinfo()
{
x: iinfo(min=-32768, max=32767, dtype=int16),
y: iinfo(min=-2147483648, max=2147483647, dtype=int32)
}
>>> c = ivy.Container(x=ivy.array([-12,123,4], dtype=ivy.int8),
... y=ivy.array([76,-81,16], dtype=ivy.int16))
>>> c.iinfo()
{
x: iinfo(min=-128, max=127, dtype=int8),
y: iinfo(min=-32768, max=32767, dtype=int16)
}
"""
return self._static_iinfo(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_bool_dtype(
dtype_in: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
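"""`ivy.Container` static method variant of `ivy.is_bool_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_bool_dtype` also applies to this method with minimal changes.
"""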
return ContainerBase.cont_multi_map_in_function(
"is_bool_dtype",
dtype_in,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_bool_dtype(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
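"""`ivy.Container` instance method variant of `ivy.is_bool_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_bool_dtype` also applies to this method with minimal changes.
Examples
--------
A minimal illustrative sketch; the leaf-wise results are shown with the
default container repr, which may differ slightly across backends:
>>> x = ivy.Container(a=ivy.array([True, False]), b=ivy.array([1, 2, 3]))
>>> print(x.is_bool_dtype())
{
a: true,
b: false
}
"""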
return self._static_is_bool_dtype(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_float_dtype(
dtype_in: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `is_float_dtype`. This
method simply wraps this function, so the docstring of `is_float_dtype`
roughly applies to this method.
Parameters
----------
dtype_in : ivy.Container
The input to check for float dtype.
key_chains : Optional[Union[List[str], Dict[str, str]]]
The key chains to use when mapping over the input.
to_apply : bool
Whether to apply the mapping over the input.
prune_unapplied : bool
Whether to prune the keys that were not applied.
map_sequences : bool
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret : bool
Boolean indicating whether the input has float dtype.
Examples
--------
>>> x = ivy.Container.static_is_float_dtype(ivy.float32)
>>> print(x)
True
>>> x = ivy.Container.static_is_float_dtype(ivy.int64)
>>> print(x)
False
>>> x = ivy.Container.static_is_float_dtype(ivy.int32)
>>> print(x)
False
>>> x = ivy.Container.static_is_float_dtype(ivy.bool)
>>> print(x)
False
>>> arr = ivy.array([1.2, 3.2, 4.3], dtype=ivy.float32)
>>> print(arr.is_float_dtype())
True
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]), b=ivy.array([3, 4, 5]))
>>> print(x.a.dtype, x.b.dtype)
float32 int32
"""
return ContainerBase.cont_multi_map_in_function(
"is_float_dtype",
dtype_in,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_float_dtype(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.is_float_dtype`.
This method simply wraps the function, and so the docstring for
`ivy.is_float_dtype` also applies to this method with minimal changes.
Parameters
----------
self : ivy.Container
The `ivy.Container` instance to call `ivy.is_float_dtype` on.
key_chains : Union[List[str], Dict[str, str]]
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply : bool
Boolean indicating whether to apply the
method to the key-chains. Default is ``True``.
prune_unapplied : bool
Boolean indicating whether to prune the
key-chains that were not applied. Default is ``False``.
map_sequences : bool
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret : bool
Boolean of whether the input is of a float dtype.
Examples
--------
>>> x = ivy.is_float_dtype(ivy.float32)
>>> print(x)
True
>>> x = ivy.is_float_dtype(ivy.int64)
>>> print(x)
False
>>> x = ivy.is_float_dtype(ivy.int32)
>>> print(x)
False
>>> x = ivy.is_float_dtype(ivy.bool)
>>> print(x)
False
>>> arr = ivy.array([1.2, 3.2, 4.3], dtype=ivy.float32)
>>> print(arr.is_float_dtype())
True
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]), b=ivy.array([3, 4, 5]))
>>> print(x.a.dtype, x.b.dtype)
float32 int32
"""
return self._static_is_float_dtype(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_int_dtype(
dtype_in: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
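"""`ivy.Container` static method variant of `ivy.is_int_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_int_dtype` also applies to this method with minimal changes.
"""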
return ContainerBase.cont_multi_map_in_function(
"is_int_dtype",
dtype_in,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_int_dtype(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
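"""`ivy.Container` instance method variant of `ivy.is_int_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_int_dtype` also applies to this method with minimal changes.
Examples
--------
A minimal illustrative sketch; the leaf-wise results are shown with the
default container repr, which may differ slightly across backends:
>>> x = ivy.Container(a=ivy.array([0., 1., 2.]), b=ivy.array([3, 4, 5]))
>>> print(x.is_int_dtype())
{
a: false,
b: true
}
"""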
return self._static_is_int_dtype(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_uint_dtype(
dtype_in: ivy.Container,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
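"""`ivy.Container` static method variant of `ivy.is_uint_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_uint_dtype` also applies to this method with minimal changes.
"""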
return ContainerBase.cont_multi_map_in_function(
"is_uint_dtype",
dtype_in,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_uint_dtype(
self: ivy.Container,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
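"""`ivy.Container` instance method variant of `ivy.is_uint_dtype`. This
method simply wraps the function, and so the docstring for
`ivy.is_uint_dtype` also applies to this method with minimal changes.
Examples
--------
A minimal illustrative sketch; the leaf-wise results are shown with the
default container repr, which may differ slightly across backends:
>>> x = ivy.Container(a=ivy.array([1, 2], dtype=ivy.uint8),
... b=ivy.array([3, 4], dtype=ivy.int32))
>>> print(x.is_uint_dtype())
{
a: true,
b: false
}
"""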
return self._static_is_uint_dtype(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_is_complex_dtype(
dtype_in: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `is_complex_dtype`. This
method simply wraps this function, so the docstring of
`is_complex_dtype` roughly applies to this method.
Parameters
----------
dtype_in : ivy.Container
The input to check for complex dtype.
key_chains : Optional[Union[List[str], Dict[str, str]]]
The key chains to use when mapping over the input.
to_apply : bool
Whether to apply the mapping over the input.
prune_unapplied : bool
Whether to prune the keys that were not applied.
map_sequences : bool
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret : bool
Boolean indicating whether the input has complex dtype.
Examples
--------
>>> x = ivy.Container.static_is_complex_dtype(ivy.complex64)
>>> print(x)
True
>>> x = ivy.Container.static_is_complex_dtype(ivy.int64)
>>> print(x)
False
>>> x = ivy.Container.static_is_complex_dtype(ivy.float32)
>>> print(x)
False
"""
return ContainerBase.cont_multi_map_in_function(
"is_complex_dtype",
dtype_in,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def is_complex_dtype(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.is_complex_dtype`.
This method simply wraps the function, and so the docstring for
`ivy.is_complex_dtype` also applies to this method with minimal
changes.
Parameters
----------
self : ivy.Container
The `ivy.Container` instance to call `ivy.is_complex_dtype` on.
key_chains : Union[List[str], Dict[str, str]]
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply : bool
Boolean indicating whether to apply the
method to the key-chains. Default is ``True``.
prune_unapplied : bool
Boolean indicating whether to prune the
key-chains that were not applied. Default is ``False``.
map_sequences : bool
Boolean indicating whether to map method
to sequences (list, tuple). Default is ``False``.
Returns
-------
ret : bool
Boolean of whether the input is of a complex dtype.
Examples
--------
>>> x = ivy.is_complex_dtype(ivy.complex64)
>>> print(x)
True
>>> x = ivy.is_complex_dtype(ivy.int64)
>>> print(x)
False
>>> x = ivy.is_complex_dtype(ivy.float32)
>>> print(x)
False
"""
return self._static_is_complex_dtype(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_result_type(
*arrays_and_dtypes: ivy.Container,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` static method variant of `ivy.result_type`. This
method simply wraps the function, and so the docstring for
`ivy.result_type` also applies to this method with minimal changes.
Parameters
----------
arrays_and_dtypes
an arbitrary number of input arrays and/or dtypes.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
the dtype resulting from an operation involving the input arrays and dtypes.
Examples
--------
>>> x = ivy.Container(a = ivy.array([0, 1, 2]),
... b = ivy.array([3., 4., 5.]))
>>> print(x.a.dtype, x.b.dtype)
int32 float32
>>> print(ivy.Container.static_result_type(x, ivy.float64))
{
a: float64,
b: float64
}
"""
return ContainerBase.cont_multi_map_in_function(
"result_type",
*arrays_and_dtypes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def result_type(
self: ivy.Container,
*arrays_and_dtypes: ivy.Container,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""`ivy.Container` instance method variant of `ivy.result_type`. This
method simply wraps the function, and so the docstring for
`ivy.result_type` also applies to this method with minimal changes.
Parameters
----------
self
input container from which to cast.
arrays_and_dtypes
an arbitrary number of input arrays and/or dtypes.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
the dtype resulting from an operation involving the input arrays and dtypes.
Examples
--------
>>> x = ivy.Container(a = ivy.array([3, 3, 3]))
>>> print(x.a.dtype)
int32
>>> y = ivy.Container(b = ivy.float64)
>>> print(x.result_type(y))
{
a: {
b: float64
}
}
"""
return self._static_result_type(
self,
*arrays_and_dtypes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
| ivy/ivy/data_classes/container/data_type.py/0 | {
"file_path": "ivy/ivy/data_classes/container/data_type.py",
"repo_id": "ivy",
"token_count": 22747
} | 11 |
# global
from typing import (
Optional,
Union,
List,
Dict,
Sequence,
Tuple,
Literal,
Any,
Callable,
Iterable,
)
from numbers import Number
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithManipulationExperimental(ContainerBase):
@staticmethod
def static_moveaxis(
a: Union[ivy.Array, ivy.NativeArray, ivy.Container],
source: Union[int, Sequence[int], ivy.Container],
destination: Union[int, Sequence[int], ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.moveaxis. This method
simply wraps the function, and so the docstring for ivy.moveaxis also
applies to this method with minimal changes.
Parameters
----------
a
The container with the arrays whose axes should be reordered.
source
Original positions of the axes to move. These must be unique.
destination
Destination positions for each of the original axes.
These must also be unique.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
out
optional output container, for writing the result to.
Returns
-------
ret
Container including arrays with moved axes.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.zeros((3, 4, 5)), b=ivy.zeros((2,7,6)))
>>> ivy.Container.static_moveaxis(x, 0, -1).shape
{
a: (4, 5, 3),
b: (7, 6, 2)
}
"""
return ContainerBase.cont_multi_map_in_function(
"moveaxis",
a,
source,
destination,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def moveaxis(
self: ivy.Container,
source: Union[int, Sequence[int], ivy.Container],
destination: Union[int, Sequence[int], ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.moveaxis. This method
simply wraps the function, and so the docstring for ivy.moveaxis also
applies to this method with minimal changes.
Parameters
----------
self
The container with the arrays whose axes should be reordered.
source
Original positions of the axes to move. These must be unique.
destination
Destination positions for each of the original axes.
These must also be unique.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
out
optional output container, for writing the result to.
Returns
-------
ret
Container including arrays with moved axes.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.zeros((3, 4, 5)), b=ivy.zeros((2,7,6)))
>>> x.moveaxis(0, -1).shape
{
a: (4, 5, 3),
b: (7, 6, 2)
}
"""
return self.static_moveaxis(self, source, destination, copy=copy, out=out)
@staticmethod
def static_heaviside(
x1: Union[ivy.Array, ivy.NativeArray, ivy.Container],
x2: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.heaviside. This method
simply wraps the function, and so the docstring for ivy.heaviside also
applies to this method with minimal changes.
Parameters
----------
x1
input container including the arrays.
x2
values to use where the array is zero.
out
optional output container array, for writing the result to.
Returns
-------
ret
output container with element-wise Heaviside step function of each array.
Examples
--------
With :class:`ivy.Container` input:
>>> x1 = ivy.Container(a=ivy.array([-1.5, 0, 2.0]), b=ivy.array([3.0, 5.0]))
>>> x2 = ivy.Container(a=0.5, b=[1.0, 2.0])
>>> ivy.Container.static_heaviside(x1, x2)
{
a: ivy.array([0., 0.5, 1.]),
b: ivy.array([1.0, 1.0])
}
"""
return ContainerBase.cont_multi_map_in_function(
"heaviside",
x1,
x2,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def heaviside(
self: ivy.Container,
x2: ivy.Container,
/,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.heaviside. This method
simply wraps the function, and so the docstring for ivy.heaviside also
applies to this method with minimal changes.
Parameters
----------
self
input container including the arrays.
x2
values to use where the array is zero.
out
optional output container array, for writing the result to.
Returns
-------
ret
output container with element-wise Heaviside step function of each array.
Examples
--------
With :class:`ivy.Container` input:
>>> x1 = ivy.Container(a=ivy.array([-1.5, 0, 2.0]), b=ivy.array([3.0, 5.0]))
>>> x2 = ivy.Container(a=0.5, b=[1.0, 2.0])
>>> x1.heaviside(x2)
{
a: ivy.array([0., 0.5, 1.]),
b: ivy.array([1.0, 1.0])
}
"""
return self.static_heaviside(self, x2, out=out)
@staticmethod
def static_flipud(
m: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.flipud. This method
simply wraps the function, and so the docstring for ivy.flipud also
applies to this method with minimal changes.
Parameters
----------
m
the container with arrays to be flipped.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays corresponding to the input container's arrays
with the order of elements reversed along axis 0.
Examples
--------
With one :class:`ivy.Container` input:
>>> m = ivy.Container(a=ivy.diag([1, 2, 3]), b=ivy.arange(4))
>>> ivy.Container.static_flipud(m)
{
a: ivy.array(
[[ 0., 0., 3.],
[ 0., 2., 0.],
[ 1., 0., 0.]]
),
b: ivy.array([3, 2, 1, 0])
}
"""
return ContainerBase.cont_multi_map_in_function(
"flipud",
m,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def flipud(
self: ivy.Container,
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.flipud. This method
simply wraps the function, and so the docstring for ivy.flipud also
applies to this method with minimal changes.
Parameters
----------
self
the container with arrays to be flipped.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays corresponding to the input container's arrays
with the order of elements reversed along axis 0.
Examples
--------
With one :class:`ivy.Container` input:
>>> m = ivy.Container(a=ivy.diag([1, 2, 3]), b=ivy.arange(4))
>>> m.flipud()
{
a: ivy.array(
[[ 0., 0., 3.],
[ 0., 2., 0.],
[ 1., 0., 0.]]
),
b: ivy.array([3, 2, 1, 0])
}
"""
return self.static_flipud(self, copy=copy, out=out)
def vstack(
self: ivy.Container,
/,
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stack. This method
simply wraps the function, and so the docstring for ivy.stack also
applies to this method with minimal changes.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> x.vstack([y])
{
a: ivy.array([[[0, 1],
[2, 3]],
[[3, 2],
[1, 0]]]),
b: ivy.array([[[4, 5]],
[[1, 0]]])
}
"""
new_xs = xs.cont_copy() if ivy.is_ivy_container(xs) else xs.copy()
new_xs.insert(0, self.cont_copy())
return self.static_vstack(
new_xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_vstack(
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.stack. This method simply
wraps the function, and so the docstring for ivy.vstack also applies to
this method with minimal changes.
Examples
--------
With one :class:`ivy.Container` input:
>>> c = ivy.Container(a=[ivy.array([1,2,3]), ivy.array([0,0,0])],
... b=ivy.arange(3))
>>> y = ivy.Container.static_vstack(c)
>>> print(y)
{
a: ivy.array([[1, 2, 3],
[0, 0, 0]]),
b: ivy.array([[0],
[1],
[2]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"vstack",
xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def hstack(
self: ivy.Container,
/,
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.hstack. This method
simply wraps the function, and so the docstring for ivy.hstack also
applies to this method with minimal changes.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> z = x.hstack([y])
>>> print(z)
{
a: ivy.array([[0, 1, 3, 2],
[2, 3, 1, 0]]),
b: ivy.array([[4, 5, 1, 0]])
}
"""
new_xs = xs.cont_copy() if ivy.is_ivy_container(xs) else xs.copy()
new_xs.insert(0, self.cont_copy())
return self.static_hstack(
new_xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_hstack(
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.hstack. This method
simply wraps the function, and so the docstring for ivy.hstack also
applies to this method with minimal changes.
Examples
--------
With one :class:`ivy.Container` input:
>>> c = ivy.Container(a=[ivy.array([1,2,3]), ivy.array([0,0,0])])
>>> ivy.Container.static_hstack(c)
{
a: ivy.array([1, 2, 3, 0, 0, 0])
}
"""
return ContainerBase.cont_multi_map_in_function(
"hstack",
xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_rot90(
m: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
k: Union[int, ivy.Container] = 1,
axes: Union[Tuple[int, int], ivy.Container] = (0, 1),
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.rot90. This method simply
wraps the function, and so the docstring for ivy.rot90 also applies to
this method with minimal changes.
Parameters
----------
m
Input array of two or more dimensions.
k
Number of times the array is rotated by 90 degrees.
axes
The array is rotated in the plane defined by the axes. Axes must be
different.
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is True.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container with a rotated view of m.
Examples
--------
>>> m = ivy.Container(a=ivy.array([[1,2], [3,4]]),
... b=ivy.array([[1,2,3,4],
... [7,8,9,10]]))
>>> n = ivy.Container.static_rot90(m)
>>> print(n)
{
a: ivy.array([[2, 4],
[1, 3]]),
b: ivy.array([[4, 10],
[3, 9],
[2, 8],
[1, 7]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"rot90",
m,
copy=copy,
k=k,
axes=axes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def rot90(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
k: Union[int, ivy.Container] = 1,
axes: Union[Tuple[int, int], ivy.Container] = (0, 1),
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.rot90. This method simply
wraps the function, and so the docstring for ivy.rot90 also applies to
this method with minimal changes.
Parameters
----------
self
Input array of two or more dimensions.
k
Number of times the array is rotated by 90 degrees.
axes
The array is rotated in the plane defined by the axes. Axes must be
different.
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is True.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container with a rotated view of input array.
Examples
--------
>>> m = ivy.Container(a=ivy.array([[1,2], [3,4]]),
... b=ivy.array([[1,2,3,4],[7,8,9,10]]))
>>> n = m.rot90()
>>> print(n)
{
a: ivy.array([[2, 4],
[1, 3]]),
b: ivy.array([[4, 10],
[3, 9],
[2, 8],
[1, 7]])
}
"""
return self.static_rot90(
self,
copy=copy,
k=k,
axes=axes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_top_k(
x: Union[ivy.Container, ivy.Array, ivy.NativeArray],
k: Union[int, ivy.Container],
/,
*,
axis: Union[int, ivy.Container] = -1,
largest: Union[bool, ivy.Container] = True,
sorted: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[Union[Tuple[ivy.Container, ivy.Container], ivy.Container]] = None,
) -> Tuple[ivy.Container, ivy.Container]:
"""ivy.Container static method variant of ivy.top_k. This method simply
wraps the function, and so the docstring for ivy.top_k also applies to
this method with minimal changes.
Parameters
----------
x
The container to compute top_k for.
k
Number of top elements to return; must not exceed the array size.
axis
The axis along which to return the top elements. Default value is -1.
largest
If largest is set to False we return k smallest elements of the array.
sorted
If sorted is set to True we return the elements in sorted order.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``
out:
Optional output tuple, for writing the result to. Must be a tuple of two
Containers, with shapes that the returned tuple broadcasts to.
Returns
-------
ret
a container with indices and values.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([-1, 2, -4]), b=ivy.array([4., 5., 0.]))
>>> y = ivy.Container.static_top_k(x, 2)
>>> print(y)
{
a: [
values = ivy.array([ 2, -1]),
indices = ivy.array([1, 0])
],
b: [
values = ivy.array([5., 4.]),
indices = ivy.array([1, 0])
]
}
"""
return ContainerBase.cont_multi_map_in_function(
"top_k",
x,
k,
axis=axis,
largest=largest,
sorted=sorted,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def top_k(
self: ivy.Container,
k: Union[int, ivy.Container],
/,
*,
axis: Union[int, ivy.Container] = -1,
largest: Union[bool, ivy.Container] = True,
sorted: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[Tuple[ivy.Container, ivy.Container]] = None,
) -> Tuple[ivy.Container, ivy.Container]:
"""ivy.Container instance method variant of ivy.top_k. This method
simply wraps the function, and so the docstring for ivy.top_k also
applies to this method with minimal changes.
Parameters
----------
self
The container to compute top_k for.
k
Number of top elements to return; must not exceed the array size.
axis
The axis along which to return the top elements. Default value is -1.
largest
If largest is set to False we return k smallest elements of the array.
sorted
If sorted is set to True we return the elements in sorted order.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``
out:
Optional output tuple, for writing the result to. Must be a tuple of two
Containers, with shapes that the returned tuple broadcasts to.
Returns
-------
ret
a container with indices and values.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([-1, 2, -4]), b=ivy.array([4., 5., 0.]))
>>> y = x.top_k(2)
>>> print(y)
[{
a: ivy.array([2, -1]),
b: ivy.array([5., 4.])
}, {
a: ivy.array([1, 0]),
b: ivy.array([1, 0])
}]
"""
return self.static_top_k(
self,
k,
axis=axis,
largest=largest,
sorted=sorted,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_fliplr(
m: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.fliplr. This method
simply wraps the function, and so the docstring for ivy.fliplr also
applies to this method with minimal changes.
Parameters
----------
m
the container with arrays to be flipped. Arrays must be at least 2-D.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays corresponding to the input container's arrays
with the order of elements reversed along axis 1.
Examples
--------
With one :class:`ivy.Container` input:
>>> m = ivy.Container(a=ivy.diag([1, 2, 3]),
... b=ivy.array([[1, 2, 3],[4, 5, 6]]))
>>> ivy.Container.static_fliplr(m)
{
a: ivy.array([[0, 0, 1],
[0, 2, 0],
[3, 0, 0]]),
b: ivy.array([[3, 2, 1],
[6, 5, 4]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"fliplr",
m,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def fliplr(
self: ivy.Container,
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.fliplr. This method
simply wraps the function, and so the docstring for ivy.fliplr also
applies to this method with minimal changes.
Parameters
----------
self
the container with arrays to be flipped. Arrays must be at least 2-D.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays corresponding to the input container's arrays
with the order of elements reversed along axis 1.
Examples
--------
With one :class:`ivy.Container` input:
>>> m = ivy.Container(a=ivy.diag([1, 2, 3]),
... b=ivy.array([[1, 2, 3],[4, 5, 6]]))
>>> m.fliplr()
{
a: ivy.array([[0, 0, 1],
[0, 2, 0],
[3, 0, 0]]),
b: ivy.array([[3, 2, 1],
[6, 5, 4]])
}
"""
return self.static_fliplr(self, copy=copy, out=out)
@staticmethod
def static_i0(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.i0. This method simply
wraps the function, and so the docstring for ivy.i0 also applies to
this method with minimal changes.
Parameters
----------
x
the container with array inputs.
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays with the modified Bessel
function evaluated at each of the elements of x.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array(4))
>>> ivy.Container.static_i0(x)
{
a: ivy.array([1.26606588, 2.2795853 , 4.88079259]),
b: ivy.array(11.30192195)
}
"""
return ContainerBase.cont_multi_map_in_function(
"i0",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def i0(
self: ivy.Container,
/,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.i0. This method simply
wraps the function, and so the docstring for ivy.i0 also applies to
this method with minimal changes.
Parameters
----------
self
the container with array inputs.
out
optional output container, for writing the result to.
Returns
-------
ret
container including arrays with the modified Bessel
function evaluated at each of the elements of x.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array(4))
>>> x.i0()
{
a: ivy.array([1.26606588, 2.2795853 , 4.88079259]),
b: ivy.array(11.30192195)
}
"""
return self.static_i0(self, out=out)
@staticmethod
def static_flatten(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
copy: Optional[Union[bool, ivy.Container]] = None,
start_dim: Union[int, ivy.Container] = 0,
end_dim: Union[int, ivy.Container] = -1,
order: Union[str, ivy.Container] = "C",
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.flatten. This method
simply wraps the function, and so the docstring for ivy.flatten also
applies to this method with minimal changes.
Parameters
----------
x
input container to flatten at leaves.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
start_dim
first dim to flatten. If not set, defaults to 0.
end_dim
last dim to flatten. If not set, defaults to -1.
order
Read the elements of the input container using this index order,
and place the elements into the reshaped array using this index order.
‘C’ means to read / write the elements using C-like index order,
with the last axis index changing fastest, back to the first axis index
changing slowest.
‘F’ means to read / write the elements using Fortran-like index order, with
the first index changing fastest, and the last index changing slowest.
Note that the ‘C’ and ‘F’ options take no account of the memory layout
of the underlying array, and only refer to the order of indexing.
Default order is 'C'.
Returns
-------
ret
Container with arrays flattened at leaves.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]),
... b=ivy.array([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]))
>>> ivy.Container.static_flatten(x)
{
a: ivy.array([1, 2, 3, 4, 5, 6, 7, 8]),
b: ivy.array([9, 10, 11, 12, 13, 14, 15, 16])
}
>>> x = ivy.Container(a=ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]),
... b=ivy.array([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]))
>>> ivy.flatten(x, order="F")
[{
a: ivy.array([1, 5, 3, 7, 2, 6, 4, 8])
b: ivy.array([9, 13, 11, 15, 10, 14, 12, 16])
}]
"""
return ContainerBase.cont_multi_map_in_function(
"flatten",
x,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
start_dim=start_dim,
end_dim=end_dim,
order=order,
out=out,
)
def flatten(
self: ivy.Container,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
start_dim: Union[int, ivy.Container] = 0,
end_dim: Union[int, ivy.Container] = -1,
order: Union[str, ivy.Container] = "C",
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.flatten. This method
simply wraps the function, and so the docstring for ivy.flatten also
applies to this method with minimal changes.
Parameters
----------
self
input container to flatten at leaves.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
start_dim
first dim to flatten. If not set, defaults to 0.
end_dim
last dim to flatten. If not set, defaults to -1.
order
Read the elements of the input container using this index order,
and place the elements into the reshaped array using this index order.
‘C’ means to read / write the elements using C-like index order,
with the last axis index changing fastest, back to the first axis index
changing slowest.
‘F’ means to read / write the elements using Fortran-like index order, with
the first index changing fastest, and the last index changing slowest.
Note that the ‘C’ and ‘F’ options take no account of the memory layout
of the underlying array, and only refer to the order of indexing.
Default order is 'C'.
Returns
-------
ret
Container with arrays flattened at leaves.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]),
... b=ivy.array([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]))
>>> x.flatten()
{
a: ivy.array([1, 2, 3, 4, 5, 6, 7, 8]),
b: ivy.array([9, 10, 11, 12, 13, 14, 15, 16])
}
>>> x = ivy.Container(a=ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]),
... b=ivy.array([[[9, 10], [11, 12]], [[13, 14], [15, 16]]]))
>>> x.flatten(order="F")
{
a: ivy.array([1, 5, 3, 7, 2, 6, 4, 8]),
b: ivy.array([9, 13, 11, 15, 10, 14, 12, 16])
}
"""
return self.static_flatten(
self, copy=copy, start_dim=start_dim, end_dim=end_dim, out=out, order=order
)
@staticmethod
def static_pad(
input: ivy.Container,
pad_width: Union[Iterable[Tuple[int]], int, ivy.Container],
/,
*,
mode: Union[
Literal[
"constant",
"dilated",
"edge",
"linear_ramp",
"maximum",
"mean",
"median",
"minimum",
"reflect",
"symmetric",
"wrap",
"empty",
],
Callable,
ivy.Container,
] = "constant",
stat_length: Union[Iterable[Tuple[int]], int, ivy.Container] = 1,
constant_values: Union[Iterable[Tuple[Number]], Number, ivy.Container] = 0,
end_values: Union[Iterable[Tuple[Number]], Number, ivy.Container] = 0,
reflect_type: Union[Literal["even", "odd"], ivy.Container] = "even",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**kwargs: Optional[Union[Any, ivy.Container]],
) -> ivy.Container:
"""ivy.Container static method variant of ivy.pad.
This method simply wraps the function, and so the docstring for
ivy.pad also applies to this method with minimal changes.
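Examples
--------
A minimal illustrative sketch, assuming numpy-style padding semantics
for `ivy.pad`; the printed output may vary slightly by backend:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]))
>>> ivy.Container.static_pad(x, (1, 1), constant_values=0)
{
a: ivy.array([0, 1, 2, 3, 0])
}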
"""
return ContainerBase.cont_multi_map_in_function(
"pad",
input,
pad_width,
mode=mode,
stat_length=stat_length,
constant_values=constant_values,
end_values=end_values,
reflect_type=reflect_type,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**kwargs,
)
def pad(
self: ivy.Container,
pad_width: Union[Iterable[Tuple[int]], int, ivy.Container],
/,
*,
mode: Union[
Literal[
"constant",
"dilated",
"edge",
"linear_ramp",
"maximum",
"mean",
"median",
"minimum",
"reflect",
"symmetric",
"wrap",
"empty",
],
Callable,
ivy.Container,
] = "constant",
stat_length: Union[Iterable[Tuple[int]], int, ivy.Container] = 1,
constant_values: Union[Iterable[Tuple[Number]], Number, ivy.Container] = 0,
end_values: Union[Iterable[Tuple[Number]], Number, ivy.Container] = 0,
reflect_type: Union[Literal["even", "odd"], ivy.Container] = "even",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
**kwargs: Optional[Union[Any, ivy.Container]],
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.pad.
This method simply wraps the function, and so the docstring for
ivy.pad also applies to this method with minimal changes.
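Examples
--------
A minimal illustrative sketch, assuming numpy-style padding semantics
for `ivy.pad`; the printed output may vary slightly by backend:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]))
>>> x.pad((1, 1), mode="constant", constant_values=9)
{
a: ivy.array([9, 1, 2, 3, 9])
}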
"""
return self.static_pad(
self,
pad_width,
mode=mode,
stat_length=stat_length,
constant_values=constant_values,
end_values=end_values,
reflect_type=reflect_type,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
**kwargs,
)
@staticmethod
def static_vsplit(
ary: Union[ivy.Array, ivy.NativeArray, ivy.Container],
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.vsplit. This method
simply wraps the function, and so the docstring for ivy.vsplit also
applies to this method with minimal changes.
Parameters
----------
ary
the container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is True.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is False.
Returns
-------
ret
list of containers holding arrays split vertically from the input
Examples
--------
>>> ary = ivy.Container(
... a = ivy.array(
... [[[0., 1.],
... [2., 3.]],
... [[4., 5.],
... [6., 7.]]]
... ),
... b=ivy.array(
... [[ 0., 1., 2., 3.],
... [ 4., 5., 6., 7.],
... [ 8., 9., 10., 11.],
... [12., 13., 14., 15.]]
... )
... )
>>> ivy.Container.static_vsplit(ary, 2)
[{
a: ivy.array([[[0., 1.],
[2., 3.]]]),
b: ivy.array([[0., 1., 2., 3.],
[4., 5., 6., 7.]])
}, {
a: ivy.array([[[4., 5.],
[6., 7.]]]),
b: ivy.array([[8., 9., 10., 11.],
[12., 13., 14., 15.]])
}]
"""
return ContainerBase.cont_multi_map_in_function(
"vsplit",
ary,
indices_or_sections,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def vsplit(
self: ivy.Container,
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.vsplit. This method
simply wraps the function, and so the docstring for ivy.vsplit also
applies to this method with minimal changes.
Parameters
----------
self
the container with array inputs.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
Returns
-------
ret
list of containers holding arrays split vertically from the input
Examples
--------
>>> ary = ivy.Container(
a = ivy.array(
[[[0., 1.],
[2., 3.]],
[[4., 5.],
[6., 7.]]]
),
b=ivy.array(
[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]
)
)
>>> ary.vsplit(2)
[{
a: ivy.array([[[0., 1.],
[2., 3.]]]),
b: ivy.array([[0., 1., 2., 3.],
[4., 5., 6., 7.]])
}, {
a: ivy.array([[[4., 5.],
[6., 7.]]]),
b: ivy.array([[8., 9., 10., 11.],
[12., 13., 14., 15.]])
}]
"""
return self.static_vsplit(self, indices_or_sections, copy=copy)
@staticmethod
def static_dsplit(
ary: Union[ivy.Array, ivy.NativeArray, ivy.Container],
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.dsplit. This method
simply wraps the function, and so the docstring for ivy.dsplit also
applies to this method with minimal changes.
Parameters
----------
ary
the container with array inputs.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is True.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples). Default is False.
Returns
-------
ret
list of containers holding arrays split from the input at the 3rd axis
Examples
--------
>>> ary = ivy.Container(
a = ivy.array(
[[[0., 1.],
[2., 3.]],
[[4., 5.],
[6., 7.]]]
),
b=ivy.array(
[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]]
)
)
>>> ivy.Container.static_dsplit(ary, 2)
[{
a: ivy.array([[[0.], [2.]],
[[4.], [6.]]]),
b: ivy.array([[[0., 1.], [4., 5.], [8., 9.], [12., 13.]]])
}, {
a: ivy.array([[[1.], [3.]],
[[5.], [7.]]]),
b: ivy.array([[[2., 3.], [6., 7.], [10., 11.], [14., 15.]]])
}]
"""
return ContainerBase.cont_multi_map_in_function(
"dsplit",
ary,
indices_or_sections,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def dsplit(
self: ivy.Container,
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.dsplit. This method
simply wraps the function, and so the docstring for ivy.dsplit also
applies to this method with minimal changes.
Parameters
----------
self
the container with array inputs.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
Returns
-------
ret
list of containers holding arrays split from the input at the 3rd axis
Examples
--------
>>> ary = ivy.Container(
a = ivy.array(
[[[0., 1.],
[2., 3.]],
[[4., 5.],
[6., 7.]]]
),
b=ivy.array(
[[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]]
)
)
>>> ary.dsplit(2)
[{
a: ivy.array([[[0.], [2.]],
[[4.], [6.]]]),
b: ivy.array([[[0., 1.], [4., 5.], [8., 9.], [12., 13.]]])
}, {
a: ivy.array([[[1.], [3.]],
[[5.], [7.]]]),
b: ivy.array([[[2., 3.], [6., 7.], [10., 11.], [14., 15.]]])
}]
"""
return self.static_dsplit(self, indices_or_sections, copy=copy)
@staticmethod
def static_atleast_1d(
*arys: Union[ivy.Array, ivy.NativeArray, ivy.Container],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.atleast_1d. This method
simply wraps the function, and so the docstring for ivy.atleast_1d also
applies to this method with minimal changes.
Parameters
----------
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 1-D. Copies are made only if necessary.
Examples
--------
>>> ary = ivy.Container(a=ivy.array(1), b=ivy.array([3,4,5]),\
c=ivy.array([[3]]))
>>> ivy.Container.static_atleast_1d(ary)
{
a: ivy.array([1]),
b: ivy.array([3, 4, 5]),
c: ivy.array([[3]]),
}
"""
return ContainerBase.cont_multi_map_in_function(
"atleast_1d",
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def atleast_1d(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*arys: Union[ivy.Container, ivy.Array, ivy.NativeArray, bool, Number],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.atleast_1d. This method
simply wraps the function, and so the docstring for ivy.atleast_1d also
applies to this method with minimal changes.
Parameters
----------
self
the container with array inputs.
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 1-D. Copies are made only if necessary.
Examples
--------
>>> ary1 = ivy.Container(a=ivy.array(1), b=ivy.array([3,4]),\
c=ivy.array([[5]]))
>>> ary2 = ivy.Container(a=ivy.array(9), b=ivy.array(2),\
c=ivy.array(3))
>>> ary1.atleast_1d(ary2)
[{
a: ivy.array([1]),
b: ivy.array([3, 4]),
c: ivy.array([[5]])
}, {
a: ivy.array([9]),
b: ivy.array([2]),
c: ivy.array([3])
}]
"""
return self.static_atleast_1d(
self,
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def dstack(
self: ivy.Container,
/,
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stack. This method
simply wraps the function, and so the docstring for ivy.stack also
applies to this method with minimal changes.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> x.dstack([y])
{
a: ivy.array([[[0, 3],
[1, 2]],
[[2, 1],
[3, 0]]]),
b: ivy.array([[[4, 1]],
[[5, 0]]])
}
"""
new_xs = xs.cont_copy() if ivy.is_ivy_container(xs) else xs.copy()
new_xs.insert(0, self.cont_copy())
return self.static_dstack(
new_xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_dstack(
xs: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.stack. This method simply
wraps the function, and so the docstring for ivy.dstack also applies to
this method with minimal changes.
Examples
--------
With one :class:`ivy.Container` input:
>>> c = ivy.Container(a=[ivy.array([1,2,3]), ivy.array([0,0,0])],
b=ivy.arange(3))
>>> ivy.Container.static_dstack(c)
{
a: ivy.array([[1, 0],
[2, 0],
[3, 0]]),
b: ivy.array([[0, 1, 2]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"dstack",
xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_atleast_2d(
*arys: Union[ivy.Array, ivy.NativeArray, ivy.Container],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.atleast_2d. This method
simply wraps the function, and so the docstring for ivy.atleast_2d also
applies to this method with minimal changes.
Parameters
----------
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 2-D. Copies are made only if necessary.
Examples
--------
>>> ary = ivy.Container(a=ivy.array(1), b=ivy.array([3,4,5]),\
c=ivy.array([[3]]))
>>> ivy.Container.static_atleast_2d(ary)
{
a: ivy.array([[1]]),
b: ivy.array([[3, 4, 5]]),
c: ivy.array([[3]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"atleast_2d",
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def atleast_2d(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*arys: Union[ivy.Container, ivy.Array, ivy.NativeArray],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.atleast_2d. This method
simply wraps the function, and so the docstring for ivy.atleast_2d also
applies to this method with minimal changes.
Parameters
----------
self
container with array inputs.
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 2-D. Copies are made only if necessary.
Examples
--------
>>> ary1 = ivy.Container(a=ivy.array(1), b=ivy.array([3,4]),\
c=ivy.array([[5]]))
>>> ary2 = ivy.Container(a=ivy.array(9), b=ivy.array(2),\
c=ivy.array(3))
>>> ary1.atleast_2d(ary2)
[{
a: ivy.array([[1]]),
b: ivy.array([[3, 4]]),
c: ivy.array([[5]])
}, {
a: ivy.array([[9]]),
b: ivy.array([[2]]),
c: ivy.array([[3]])
}]
"""
return self.static_atleast_2d(
self,
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def static_atleast_3d(
*arys: Union[ivy.Array, ivy.NativeArray, ivy.Container],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.atleast_3d. This method
simply wraps the function, and so the docstring for ivy.atleast_3d also
applies to this method with minimal changes.
Parameters
----------
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 3-D. Copies are made only if necessary. For example, a 1-D
array of shape (N,) becomes a view of shape (1, N, 1), and a 2-D array
of shape (M, N) becomes a view of shape (M, N, 1).
Examples
--------
>>> ary = ivy.Container(a=ivy.array(1), b=ivy.array([3,4,5]),\
c=ivy.array([[3]]))
>>> ivy.Container.static_atleast_3d(ary)
{
a: ivy.array([[[1]]]),
b: ivy.array([[[3],
[4],
[5]]]),
c: ivy.array([[[3]]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"atleast_3d",
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def atleast_3d(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*arys: Union[ivy.Container, ivy.Array, ivy.NativeArray, bool, Number],
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.atleast_3d. This method
simply wraps the function, and so the docstring for ivy.atleast_3d also
applies to this method with minimal changes.
Parameters
----------
self
container with array inputs.
arys
one or more container with array inputs.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container or list of containers where each element within the container
is at least 3-D. Copies are made only if necessary. For example, a 1-D
array of shape (N,) becomes a view of shape (1, N, 1), and a 2-D array
of shape (M, N) becomes a view of shape (M, N, 1).
Examples
--------
>>> ary1 = ivy.Container(a=ivy.array(1), b=ivy.array([3,4]),\
c=ivy.array([[5]]))
>>> ary2 = ivy.Container(a=ivy.array(9), b=ivy.array(2),\
c=ivy.array(3))
>>> ary1.atleast_3d(ary2)
[{
a: ivy.array([[[1]]]),
b: ivy.array([[[3],
[4]]]),
c: ivy.array([[[5]]])
}, {
a: ivy.array([[[9]]]),
b: ivy.array([[[2]]]),
c: ivy.array([[[3]]])
}]
"""
return self.static_atleast_3d(
self,
*arys,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def static_take_along_axis(
arr: Union[ivy.Array, ivy.NativeArray, ivy.Container],
indices: Union[ivy.Array, ivy.NativeArray, ivy.Container],
axis: Union[int, ivy.Container],
mode: Union[str, ivy.Container] = "fill",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.take_along_axis. This
method simply wraps the function, and so the docstring for
ivy.take_along_axis also applies to this method with minimal changes.
Parameters
----------
arr
container with array inputs.
indices
container with indices of the values to extract.
axis
The axis over which to select values. If axis is None, then arr and indices
must be 1-D sequences of the same length.
mode
One of: 'clip', 'fill', 'drop'. Parameter controlling how out-of-bounds
indices will be handled.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to.
Returns
-------
ret
a container with arrays of the same shape as those in indices.
Examples
--------
>>> arr = ivy.Container(a=ivy.array([[1, 2], [3, 4]]),\
b=ivy.array([[5, 6], [7, 8]]))
>>> indices = ivy.Container(a=ivy.array([[0, 0], [1, 1]]),\
b=ivy.array([[1, 0], [1, 0]]))
>>> ivy.Container.static_take_along_axis(arr, indices, axis=1)
{
a: ivy.array([[1, 1],
[4, 4]]),
b: ivy.array([[6, 5],
[8, 7]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"take_along_axis",
arr,
indices,
axis,
mode=mode,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def take_along_axis(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
indices: Union[ivy.Container, ivy.Array, ivy.NativeArray],
axis: Union[int, ivy.Container],
mode: Union[str, ivy.Container] = "fill",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.take_along_axis. This
method simply wraps the function, and so the docstring for
ivy.take_along_axis also applies to this method with minimal changes.
Parameters
----------
self
container with array inputs.
indices
container with indices of the values to extract.
axis
The axis over which to select values. If axis is None, then arr and indices
must be 1-D sequences of the same length.
mode
One of: 'clip', 'fill', 'drop'. Parameter controlling how out-of-bounds
indices will be handled.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to.
Returns
-------
ret
a container with arrays of the same shape as those in indices.
Examples
--------
>>> arr = ivy.Container(a=ivy.array([[1, 2], [3, 4]]),\
b=ivy.array([[5, 6], [7, 8]]))
>>> indices = ivy.Container(a=ivy.array([[0, 0], [1, 1]]),\
b=ivy.array([[1, 0], [1, 0]]))
>>> arr.take_along_axis(indices, axis=1)
{
a: ivy.array([[1, 1],
[4, 4]]),
b: ivy.array([[6, 5],
[8, 7]])
}
"""
return self.static_take_along_axis(
self,
indices,
axis,
mode=mode,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def static_hsplit(
ary: Union[ivy.Array, ivy.NativeArray, ivy.Container],
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> List[ivy.Container]:
"""ivy.Container static method variant of ivy.hsplit. This method
simply wraps the function, and so the docstring for ivy.hsplit also
applies to this method with minimal changes.
Parameters
----------
ary
the container with array inputs.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
list of containers split horizontally from input array.
Examples
--------
>>> ary = ivy.Container(
a = ivy.array(
[[[0., 1.],
[2., 3.]],
[[4., 5.],
[6., 7.]]]
),
b=ivy.array(
[0., 1., 2., 3.,
4., 5., 6., 7.,
8., 9., 10., 11.,
12., 13., 14., 15.]
)
)
>>> ivy.Container.static_hsplit(ary, 2)
[{
a: ivy.array([[[0., 1.]],
[[4., 5.]]]),
b: ivy.array([0., 1., 2., 3., 4., 5., 6., 7.])
}, {
a: ivy.array([[[2., 3.]],
[[6., 7.]]]),
b: ivy.array([8., 9., 10., 11., 12., 13., 14., 15.])
}]
"""
return ContainerBase.cont_multi_map_in_function(
"hsplit",
ary,
indices_or_sections,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def hsplit(
self: ivy.Container,
indices_or_sections: Union[
int, Sequence[int], ivy.Array, ivy.NativeArray, ivy.Container
],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
) -> List[ivy.Container]:
"""ivy.Container instance method variant of ivy.hsplit. This method
simply wraps the function, and so the docstring for ivy.hsplit also
applies to this method with minimal changes.
Parameters
----------
self
the container with array inputs.
indices_or_sections
If indices_or_sections is an integer n, the array is split into n
equal sections, provided that n is a divisor of the size of the split axis.
If indices_or_sections is a sequence of ints or 1-D array,
then input is split at each of the indices.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
Returns
-------
ret
list of containers split horizontally from input container
Examples
--------
>>> ary = ivy.Container(
a = ivy.array(
[[[0., 1.],
[2., 3.]],
[[4., 5.],
[6., 7.]]]
),
b=ivy.array(
[0., 1., 2., 3.,
4., 5., 6., 7.,
8., 9., 10., 11.,
12., 13., 14., 15.]
)
)
>>> ary.hsplit(2)
[{
a: ivy.array([[[0., 1.]],
[[4., 5.]]]),
b: ivy.array([0., 1., 2., 3., 4., 5., 6., 7.])
}, {
a: ivy.array([[[2., 3.]],
[[6., 7.]]]),
b: ivy.array([8., 9., 10., 11., 12., 13., 14., 15.])
}]
"""
return self.static_hsplit(self, indices_or_sections, copy=copy)
@staticmethod
def static_broadcast_shapes(
shapes: Union[ivy.Container, List[Tuple[int]]],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.broadcast_shapes. This
method simply wraps the function, and so the docstring for
ivy.broadcast_shapes also applies to this method with minimal changes.
Parameters
----------
shapes
the container with shapes to broadcast.
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Container with broadcasted shapes.
Examples
--------
>>> shapes = ivy.Container(a = [(2, 3), (2, 1)],
... b = [(2, 3), (1, 3)],
... c = [(2, 3), (2, 3)],
... d = [(2, 3), (2, 1), (1, 3), (2, 3)])
>>> z = ivy.Container.static_broadcast_shapes(shapes)
>>> print(z)
{
a: (2, 3),
b: (2, 3),
c: (2, 3),
d: (2, 3)
}
"""
return ContainerBase.cont_multi_map_in_function(
"broadcast_shapes",
shapes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def broadcast_shapes(
self: ivy.Container,
/,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.broadcast_shapes. This
method simply wraps the function, and so the docstring for
ivy.broadcast_shapes also applies to this method with minimal changes.
Parameters
----------
self
the container with shapes to broadcast.
Returns
-------
ret
Container with broadcasted shapes.
Examples
--------
>>> shapes = ivy.Container(a = (2, 3, 5),
... b = (2, 3, 1))
>>> z = shapes.broadcast_shapes()
>>> print(z)
{
a: [2, 3, 5],
b: [2, 3, 1]
}
"""
return self.static_broadcast_shapes(self, out=out)
@staticmethod
def static_expand(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""
Parameters
----------
x
input container.
shape
A 1-D Array indicates the shape you want to expand to,
following the broadcast rule.
copy
boolean indicating whether to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
device
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
An output Container with the results.
"""
return ContainerBase.cont_multi_map_in_function(
"expand",
x,
shape,
copy=copy,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def expand(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, ivy.Container],
/,
*,
copy: Optional[Union[bool, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""
Parameters
----------
self
input container.
shape
A 1-D Array indicates the shape you want to expand to,
following the broadcast rule.
copy
boolean indicating whether to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
device
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
An output Container with the results.
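Examples
--------
A minimal illustrative sketch (expected values follow standard
broadcasting semantics and are not verified against every backend):
>>> x = ivy.Container(a=ivy.array([[1.], [2.]]))
>>> x.expand((2, 3))
{
a: ivy.array([[1., 1., 1.],
[2., 2., 2.]])
}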
"""
return self.static_expand(self, shape, copy=copy, out=out)
@staticmethod
def static_as_strided(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
strides: Union[Sequence[int], ivy.Container],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.as_strided. This method
simply wraps the function, and so the docstring for ivy.as_strided also
applies to this method with minimal changes.
Parameters
----------
x
Input container.
shape
The shape of the new arrays.
strides
The strides of the new arrays (specified in bytes).
key_chains
The keychains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
Output container.
"""
return ContainerBase.cont_multi_map_in_function(
"as_strided",
x,
shape,
strides,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def as_strided(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
strides: Union[Sequence[int], ivy.Container],
/,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.as_strided. This method
simply wraps the function, and so the docstring for ivy.as_strided also
applies to this method with minimal changes.
Parameters
----------
self
Input container.
shape
The shape of the new arrays.
strides
The strides of the new arrays (specified in bytes).
Returns
-------
ret
Output container.
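Examples
--------
A minimal sketch, assuming a float32 leaf (4-byte itemsize) so that
byte strides of (4, 4) yield overlapping length-2 windows:
>>> x = ivy.Container(a=ivy.array([1., 2., 3., 4.]))
>>> x.as_strided((3, 2), (4, 4))
{
a: ivy.array([[1., 2.],
[2., 3.],
[3., 4.]])
}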
"""
return self.static_as_strided(self, shape, strides)
@staticmethod
def static_concat_from_sequence(
input_sequence: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
/,
*,
new_axis: Union[int, ivy.Container] = 0,
axis: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.concat_from_sequence.
This method simply wraps the function, and so the docstring for
ivy.concat_from_sequence also applies to this method with minimal
changes.
Parameters
----------
input_sequence
Container with leaves to join. Each array leave must have the same shape.
new_axis
whether to insert and concatenate along a new axis; the default 0
concatenates along an existing axis.
new_axis = 0: concatenate
new_axis = 1: stack
axis
axis along which the array leaves will be concatenated. More details
can be found in the docstring for ivy.concat_from_sequence.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an output container with the results.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> z = ivy.Container.static_concat_from_sequence(x,new_axis = 1, axis = 1)
>>> print(z)
{
a: ivy.array([[0, 2],
[1, 3]]),
b: ivy.array([[4],
[5]])
}
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> z = ivy.Container.static_concat_from_sequence([x,y])
>>> print(z)
{
a: ivy.array([[0, 1],
[2, 3],
[3, 2],
[1, 0]]),
b: ivy.array([[4, 5],
[1, 0]])
}
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> z = ivy.Container.static_concat_from_sequence([x,y],new_axis=1, axis=1)
>>> print(z)
{
a: ivy.array([[[0, 1],
[3, 2]],
[[2, 3],
[1, 0]]]),
b: ivy.array([[[4, 5],
[1, 0]]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"concat_from_sequence",
input_sequence,
new_axis=new_axis,
axis=axis,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def concat_from_sequence(
self: ivy.Container,
/,
input_sequence: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
*,
new_axis: Union[int, ivy.Container] = 0,
axis: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stack. This method
simply wraps the function, and so the docstring for ivy.stack also
applies to this method with minimal changes.
Parameters
----------
self
Container with leaves to join with leaves of other arrays/containers.
Each array leave must have the same shape.
input_sequence
Container with other leaves to join.
Each array leave must have the same shape.
new_axis
whether to insert and concatenate along a new axis; the default 0
concatenates along an existing axis.
new_axis = 0: concatenate
new_axis = 1: stack
axis
axis along which the array leaves will be concatenated. More details can
be found in the docstring for ivy.concat_from_sequence.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an output container with the results.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> z = x.concat_from_sequence([y], new_axis=1, axis=1)
>>> print(z)
{
a: ivy.array([[[0, 1],
[3, 2]],
[[2, 3],
[1, 0]]]),
b: ivy.array([[[4, 5],
[1, 0]]])
}
"""
new_input_sequence = (
input_sequence.cont_copy()
if ivy.is_ivy_container(input_sequence)
else input_sequence.copy()
)
new_input_sequence.insert(0, self.cont_copy())
return self.static_concat_from_sequence(
new_input_sequence,
new_axis=new_axis,
axis=axis,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def associative_scan(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
fn: Union[Callable, ivy.Container],
/,
*,
reverse: Union[bool, ivy.Container] = False,
axis: Union[int, ivy.Container] = 0,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.associative_scan. This
method simply wraps the function, and so the docstring for
ivy.associative_scan also applies to this method with minimal changes.
Parameters
----------
self
The Container to scan over.
fn
The associative function to apply.
reverse
Whether to scan in reverse with respect to the given axis.
axis
The axis to scan over.
Returns
-------
ret
The result of the scan.
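Examples
--------
A minimal sketch (a cumulative sum expressed as an associative scan;
assumes ivy.associative_scan maps over container leaves, which would
give ``[1, 3, 6]`` for the leaf below):
>>> x = ivy.Container(a=ivy.array([1, 2, 3]))
>>> y = x.associative_scan(lambda a, b: a + b)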
"""
return ivy.associative_scan(self, fn, reverse=reverse, axis=axis)
@staticmethod
def _static_unique_consecutive(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
axis: Optional[Union[int, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.unique_consecutive.
This method simply wraps the function, and so the docstring for
ivy.unique_consecutive also applies to this method with minimal
changes.
"""
return ContainerBase.cont_multi_map_in_function(
"unique_consecutive",
x,
axis=axis,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def unique_consecutive(
self: ivy.Container,
/,
*,
axis: Optional[Union[int, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.unique_consecutive.
This method simply wraps the function, and so the docstring for
ivy.unique_consecutive also applies to this method with minimal
changes.
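Examples
--------
A minimal sketch (each leaf is assumed to hold the named tuple
returned by ivy.unique_consecutive, i.e. ``(output, inverse_indices,
counts)``; the consecutive-unique output for the leaf below is
``[1, 2, 3, 1]``):
>>> x = ivy.Container(a=ivy.array([1, 1, 2, 2, 3, 1]))
>>> res = x.unique_consecutive()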
"""
return self._static_unique_consecutive(
self,
axis=axis,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_fill_diagonal(
a: Union[ivy.Array, ivy.NativeArray, ivy.Container],
v: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
wrap: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.fill_diagonal.
This method simply wraps the function, and so the docstring for
ivy.fill_diagonal also applies to this method with minimal
changes.
"""
return ContainerBase.cont_multi_map_in_function(
"fill_diagonal",
a,
v,
wrap=wrap,
)
def fill_diagonal(
self: ivy.Container,
v: Union[int, float, ivy.Container],
/,
*,
wrap: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.fill_diagonal.
This method simply wraps the function, and so the docstring for
ivy.fill_diagonal also applies to this method with minimal
changes.
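Examples
--------
A minimal illustrative sketch:
>>> x = ivy.Container(a=ivy.zeros((3, 3)))
>>> x.fill_diagonal(2.)
{
a: ivy.array([[2., 0., 0.],
[0., 2., 0.],
[0., 0., 2.]])
}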
"""
return self._static_fill_diagonal(
self,
v,
wrap=wrap,
)
@staticmethod
def static_unfold(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
mode: Optional[Union[int, ivy.Container]] = 0,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.unfold.
This method simply wraps the function, and so the docstring for
ivy.unfold also applies to this method with minimal
changes.
Parameters
----------
x
input tensor to be unfolded
mode
indexing starts at 0, therefore mode is in ``range(0, tensor.ndim)``
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container of unfolded tensors
"""
return ContainerBase.cont_multi_map_in_function(
"unfold",
x,
mode,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def unfold(
self: ivy.Container,
/,
mode: Optional[Union[int, ivy.Container]] = 0,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.unfold.
This method simply wraps the function, and so the docstring for
ivy.unfold also applies to this method with minimal
changes.
Parameters
----------
self
input tensor to be unfolded
mode
indexing starts at 0, therefore mode is in ``range(0, tensor.ndim)``
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container of unfolded tensors
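Examples
--------
A minimal sketch; the mode-0 unfolding of a matrix is the matrix
itself, so no layout assumptions are needed:
>>> x = ivy.Container(a=ivy.array([[1, 2], [3, 4]]))
>>> x.unfold(0)
{
a: ivy.array([[1, 2],
[3, 4]])
}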
"""
return self.static_unfold(self, mode, out=out)
@staticmethod
def static_fold(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
mode: Union[int, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.fold.
This method simply wraps the function, and so the docstring for
ivy.fold also applies to this method with minimal
changes.
Parameters
----------
x
input unfolded tensor
mode
indexing starts at 0, therefore mode is in ``range(0, tensor.ndim)``
shape
shape of the original tensor before unfolding
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container of folded tensors
"""
return ContainerBase.cont_multi_map_in_function(
"fold",
x,
mode,
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def fold(
self: ivy.Container,
/,
mode: Union[int, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.fold.
This method simply wraps the function, and so the docstring for
ivy.fold also applies to this method with minimal
changes.
Parameters
----------
self
input tensor to be folded
mode
indexing starts at 0, therefore mode is in ``range(0, tensor.ndim)``
shape
shape of the original tensor before unfolding
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Container of folded tensors
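Examples
--------
A minimal round-trip sketch (folding a mode-0 unfolding back into the
original shape):
>>> x = ivy.Container(a=ivy.array([[1, 2], [3, 4]]))
>>> x.unfold(0).fold(0, (2, 2))
{
a: ivy.array([[1, 2],
[3, 4]])
}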
"""
return self.static_fold(self, mode, shape, out=out)
@staticmethod
def static_partial_unfold(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
mode: Optional[Union[int, ivy.Container]] = 0,
skip_begin: Optional[Union[int, ivy.Container]] = 1,
skip_end: Optional[Union[int, ivy.Container]] = 0,
ravel_tensors: Optional[Union[bool, ivy.Container]] = False,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.partial_unfold.
This method simply wraps the function, and so the docstring for
ivy.partial_unfold also applies to this method with minimal
changes.
Parameters
----------
x
tensor of shape n_samples x n_1 x n_2 x ... x n_i
mode
indexing starts at 0, therefore mode is in range(0, tensor.ndim)
skip_begin
number of dimensions to leave untouched at the beginning
skip_end
number of dimensions to leave untouched at the end
ravel_tensors
if True, the unfolded tensors are also flattened
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially unfolded tensor
"""
return ContainerBase.cont_multi_map_in_function(
"partial_unfold",
x,
mode,
skip_begin,
skip_end,
ravel_tensors,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def partial_unfold(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
mode: Optional[Union[int, ivy.Container]] = 0,
skip_begin: Optional[Union[int, ivy.Container]] = 1,
skip_end: Optional[Union[int, ivy.Container]] = 0,
ravel_tensors: Optional[Union[bool, ivy.Container]] = False,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.partial_unfold.
This method simply wraps the function, and so the docstring for
ivy.partial_unfold also applies to this method with minimal
changes.
Parameters
----------
self
tensor of shape n_samples x n_1 x n_2 x ... x n_i
mode
indexing starts at 0, therefore mode is in range(0, tensor.ndim)
skip_begin
number of dimensions to leave untouched at the beginning
skip_end
number of dimensions to leave untouched at the end
ravel_tensors
if True, the unfolded tensors are also flattened
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially unfolded tensor
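Examples
--------
A shape-only sketch (element ordering depends on the backend's memory
layout, so only the resulting shape is noted):
>>> x = ivy.Container(a=ivy.zeros((2, 3, 4)))
>>> y = x.partial_unfold(mode=1)  # leaf 'a' has shape (2, 4, 3)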
"""
return self.static_partial_unfold(
self, mode, skip_begin, skip_end, ravel_tensors, out=out
)
@staticmethod
def static_partial_fold(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
mode: Union[int, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
skip_begin: Optional[Union[int, ivy.Container]] = 1,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.partial_fold.
This method simply wraps the function, and so the docstring for
ivy.partial_fold also applies to this method with minimal
changes.
Parameters
----------
x
a partially unfolded tensor
mode
indexing starts at 0, therefore mode is in range(0, tensor.ndim)
shape
the shape of the original full tensor (including skipped dimensions)
skip_begin
number of dimensions left untouched at the beginning
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially re-folded tensor
"""
return ContainerBase.cont_multi_map_in_function(
"partial_fold",
x,
mode,
shape,
skip_begin,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def partial_fold(
self: Union[ivy.Array, ivy.NativeArray],
/,
mode: Union[int, ivy.Container],
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
skip_begin: Optional[Union[int, ivy.Container]] = 1,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.partial_fold.
This method simply wraps the function, and so the docstring for
ivy.partial_fold also applies to this method with minimal
changes.
Parameters
----------
self
a partially unfolded tensor
mode
indexing starts at 0, therefore mode is in range(0, tensor.ndim)
shape
the shape of the original full tensor (including skipped dimensions)
skip_begin
number of dimensions left untouched at the beginning
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially re-folded tensor
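Examples
--------
A shape-only round-trip sketch (partial_fold inverts partial_unfold
given the original shape):
>>> x = ivy.Container(a=ivy.zeros((2, 3, 4)))
>>> y = x.partial_unfold(mode=1)      # leaf 'a' has shape (2, 4, 3)
>>> z = y.partial_fold(1, (2, 3, 4))  # leaf 'a' back to shape (2, 3, 4)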
"""
return self.static_partial_fold(self, mode, shape, skip_begin, out=out)
@staticmethod
def static_partial_tensor_to_vec(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
skip_begin: Optional[Union[int, ivy.Container]] = 1,
skip_end: Optional[Union[int, ivy.Container]] = 0,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.partial_tensor_to_vec.
This method simply wraps the function, and so the docstring for
ivy.partial_tensor_to_vec also applies to this method with minimal
changes.
Parameters
----------
x
tensor to partially vectorise
skip_begin
number of dimensions to leave untouched at the beginning
skip_end
number of dimensions to leave untouched at the end
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially vectorised tensor with the
`skip_begin` first and `skip_end` last dimensions untouched
"""
return ContainerBase.cont_multi_map_in_function(
"partial_tensor_to_vec",
x,
skip_begin,
skip_end,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def partial_tensor_to_vec(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
skip_begin: Optional[Union[int, ivy.Container]] = 1,
skip_end: Optional[Union[int, ivy.Container]] = 0,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.partial_tensor_to_vec.
This method simply wraps the function, and so the docstring for
ivy.partial_tensor_to_vec also applies to this method with minimal
changes.
Parameters
----------
self
tensor to partially vectorise
skip_begin
number of dimensions to leave untouched at the beginning
skip_end
number of dimensions to leave untouched at the end
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
partially vectorised tensor with the
`skip_begin` first and `skip_end` last dimensions untouched
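Examples
--------
A shape-only sketch (the first ``skip_begin`` dimensions are kept and
the remaining dimensions are flattened per sample):
>>> x = ivy.Container(a=ivy.zeros((2, 3, 4)))
>>> y = x.partial_tensor_to_vec()  # leaf 'a' has shape (2, 12)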
"""
return self.static_partial_tensor_to_vec(self, skip_begin, skip_end, out=out)
@staticmethod
def static_partial_vec_to_tensor(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
skip_begin: Optional[Union[int, ivy.Container]] = 1,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.partial_vec_to_tensor.
This method simply wraps the function, and so the docstring for
ivy.partial_vec_to_tensor also applies to this method with minimal
changes.
Parameters
----------
x
a partially vectorised tensor
shape
the shape of the original full tensor (including skipped dimensions)
skip_begin
number of dimensions to leave untouched at the beginning
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
full tensor
"""
return ContainerBase.cont_multi_map_in_function(
"partial_vec_to_tensor",
x,
shape,
skip_begin,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def partial_vec_to_tensor(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
shape: Union[ivy.Shape, ivy.NativeShape, Sequence[int], ivy.Container],
skip_begin: Optional[Union[int, ivy.Container]] = 1,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.partial_vec_to_tensor.
This method simply wraps the function, and so the docstring for
ivy.partial_vec_to_tensor also applies to this method with minimal
changes.
Parameters
----------
self
partially vectorized tensor
shape
the shape of the original full tensor (including skipped dimensions)
skip_begin
number of dimensions to leave untouched at the beginning
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
full tensor
"""
return self.static_partial_vec_to_tensor(self, shape, skip_begin, out=out)
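# A hedged sketch of the round trip with the method above (shapes only;
# values are random):
#
# >>> vec = ivy.Container(a=ivy.random_uniform(shape=(2, 12)))
# >>> full = vec.partial_vec_to_tensor(shape=(2, 3, 4), skip_begin=1)
# # full.a recovers shape (2, 3, 4), inverting partial_tensor_to_vec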
@staticmethod
def static_matricize(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
row_modes: Union[Sequence[int], ivy.Container],
column_modes: Optional[Union[Sequence[int], ivy.Container]] = None,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.matricize.
This method simply wraps the function, and so the docstring for
ivy.matricize also applies to this method with minimal
changes.
Parameters
----------
x
the input tensor
row_modes
modes to use as row of the matrix (in the desired order)
column_modes
modes to use as column of the matrix, in the desired order
if None, the modes not in `row_modes` will be used in ascending order
out
optional output array, for writing the result to.
Returns
-------
ret
    ivy.Container with the matricized tensors
"""
return ContainerBase.cont_multi_map_in_function(
"matricize",
x,
row_modes,
column_modes,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def matricize(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
row_modes: Union[Sequence[int], ivy.Container],
column_modes: Optional[Union[Sequence[int], ivy.Container]] = None,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.matricize.
This method simply wraps the function, and so the docstring for
ivy.matricize also applies to this method with minimal
changes.
Parameters
----------
self
the input tensor
row_modes
modes to use as row of the matrix (in the desired order)
column_modes
modes to use as column of the matrix, in the desired order
if None, the modes not in `row_modes` will be used in ascending order
out
optional output array, for writing the result to.
Returns
-------
ret
    ivy.Container with the matricized tensors
"""
return self.static_matricize(self, row_modes, column_modes, out=out)
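# A hedged sketch for the method above (shapes only; values are random):
#
# >>> x = ivy.Container(a=ivy.random_uniform(shape=(2, 3, 4)))
# >>> m = x.matricize(row_modes=[0])
# # m.a has shape (2, 12): mode 0 forms the rows, modes 1 and 2 the columns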
@staticmethod
def static_soft_thresholding(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
threshold: Union[float, ivy.Array, ivy.NativeArray, ivy.Container],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.soft_thresholding.
This method simply wraps the function, and so the docstring for
ivy.soft_thresholding also applies to this method with minimal
changes.
Parameters
----------
x
the input tensor
threshold
float or array with shape tensor.shape
* If float the threshold is applied to the whole tensor
* If array, one threshold is applied per element; 0 values are ignored
out
optional output array, for writing the result to.
Returns
-------
ivy.Container
thresholded tensor on which the operator has been applied
"""
return ContainerBase.cont_multi_map_in_function(
"soft_thresholding",
x,
threshold,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def soft_thresholding(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
threshold: Union[float, ivy.Array, ivy.NativeArray, ivy.Container],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.soft_thresholding.
This method simply wraps the function, and so the docstring for
ivy.soft_thresholding also applies to this method with minimal
changes.
Parameters
----------
self
the input tensor
threshold
float or array with shape tensor.shape
* If float the threshold is applied to the whole tensor
* If array, one threshold is applied per element; 0 values are ignored
out
optional output array, for writing the result to.
Returns
-------
ivy.Container
thresholded tensor on which the operator has been applied
"""
return self.static_soft_thresholding(self, threshold, out=out)
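# A deterministic sketch of the operator above, using
# soft_thresholding(x, t) = sign(x) * max(|x| - t, 0):
#
# >>> x = ivy.Container(a=ivy.array([-2.0, 1.5, 0.5]))
# >>> print(x.soft_thresholding(1.0))
# {
#     a: ivy.array([-1., 0.5, 0.])
# }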
@staticmethod
def static_column_stack(
xs: Sequence[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
/,
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.column_stack.
This method simply wraps the function, and so the docstring for
ivy.column_stack also applies to this method with minimal
changes.
Parameters
----------
xs
Container with leaves to stack.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
Optional output array, for writing the result to.
Returns
-------
ret
An output container with the results.
"""
return ContainerBase.cont_multi_map_in_function(
"column_stack",
xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def column_stack(
self: ivy.Container,
/,
xs: Sequence[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
*,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.column_stack.
This method simply wraps the function, and so the docstring for
ivy.column_stack also applies to this method with minimal
changes.
Parameters
----------
self
Container with leaves to stack with leaves of other arrays/containers.
xs
Container with other leaves to join.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
Optional output array, for writing the result to.
Returns
-------
ret
An output container with the results.
"""
new_xs = xs.cont_copy() if ivy.is_ivy_container(xs) else list(xs).copy()
new_xs.insert(0, self.cont_copy())
return self.static_column_stack(
new_xs,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
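# A deterministic sketch for the method above: 1-D leaves are stacked
# column-wise, with self's leaf forming the first column:
#
# >>> x = ivy.Container(a=ivy.array([1, 2, 3]))
# >>> y = ivy.Container(a=ivy.array([4, 5, 6]))
# >>> print(x.column_stack([y]))
# {
#     a: ivy.array([[1, 4],
#                   [2, 5],
#                   [3, 6]])
# }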
@staticmethod
def _static_put_along_axis(
arr: Union[ivy.Array, ivy.NativeArray, ivy.Container],
indices: Union[ivy.Array, ivy.NativeArray, ivy.Container],
values: Union[ivy.Array, ivy.NativeArray, ivy.Container],
axis: Union[int, ivy.Container],
/,
*,
mode: Optional[
Union[Literal["sum", "min", "max", "mul", "mean", "replace"], ivy.Container]
] = "replace",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: bool = True,
prune_unapplied: bool = False,
map_sequences: bool = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.put_along_axis.
This method simply wraps the function, and so the docstring for
ivy.put_along_axis also applies to this method with minimal
changes.
"""
return ContainerBase.cont_multi_map_in_function(
"put_along_axis",
arr,
indices,
values,
axis,
mode=mode,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def put_along_axis(
self: ivy.Container,
indices: Union[ivy.Array, ivy.NativeArray, ivy.Container],
values: Union[ivy.Array, ivy.NativeArray, ivy.Container],
axis: Union[int, ivy.Container],
/,
*,
mode: Optional[
Union[Literal["sum", "min", "max", "mul", "mean", "replace"], ivy.Container]
] = "replace",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: bool = True,
prune_unapplied: bool = False,
map_sequences: bool = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.put_along_axis.
This method simply wraps the function, and so the docstring for
ivy.put_along_axis also applies to this method with minimal
changes.
"""
return self._static_put_along_axis(
self,
indices,
values,
axis,
mode=mode,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
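# A deterministic sketch for the method above with the default 'replace'
# mode (indices select positions along axis 1 to overwrite):
#
# >>> arr = ivy.Container(a=ivy.array([[10, 30, 20], [60, 40, 50]]))
# >>> indices = ivy.array([[0], [1]])
# >>> values = ivy.array([[99], [99]])
# >>> print(arr.put_along_axis(indices, values, 1))
# {
#     a: ivy.array([[99, 30, 20],
#                   [60, 99, 50]])
# }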
@staticmethod
def _static_take(
x: Union[int, ivy.Array, ivy.NativeArray, ivy.Container],
indices: Union[int, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
axis: Optional[Union[int, ivy.Container]] = None,
mode: Union[str, ivy.Container] = "fill",
fill_value: Optional[Union[Number, ivy.Container]] = None,
out: Optional[Union[ivy.Array, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.take.
This method simply wraps the function, and so the docstring for
ivy.take also applies to this method with minimal changes.
Parameters
----------
x
input array
indices
array indices. Must have an integer data type.
axis
axis over which to select values. If `axis` is negative,
the function must determine the axis along which to select values
by counting from the last dimension.
By default, the flattened input array is used.
mode
specifies how out-of-bounds `indices` will behave.
- ‘raise’ – raise an error
- ‘wrap’ – wrap around
- ‘clip’ – clip to the range (all indices that are too large are
replaced by the index that addresses the last element along that axis.
Note that this disables indexing with negative numbers.)
- 'fill' (default) – returns invalid values (e.g. NaN)
for out-of-bounds indices (see also fill_value below)
fill_value
fill value to return for out-of-bounds slices
(Defaults to NaN for inexact types,
the largest negative value for signed types,
the largest positive value for unsigned types, and True for booleans.)
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains,
otherwise key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was
not applied. Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
an array having the same data type as `x`.
The output array must have the same rank
(i.e., number of dimensions) as `x` and
must have the same shape as `x`,
except for the axis specified by `axis`
whose size must equal the number of elements in `indices`.
Examples
--------
With `ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([True,False,False]),
... b=ivy.array([2.3,4.5,6.7]),
... c=ivy.array([1,2,3]))
>>> indices = ivy.array([[1,9,2]])
>>> y = ivy.Container._static_take(x, indices)
>>> print(y)
{
a: ivy.array([[False, True, False]]),
b: ivy.array([[4.5, nan, 6.69999981]]),
c: ivy.array([[2, -2147483648, 3]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"take",
x,
indices,
axis=axis,
mode=mode,
fill_value=fill_value,
out=out,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def take(
self: Union[int, ivy.Array, ivy.NativeArray, ivy.Container],
indices: Union[int, ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
axis: Optional[Union[int, ivy.Container]] = None,
mode: Union[str, ivy.Container] = "fill",
fill_value: Optional[Union[Number, ivy.Container]] = None,
out: Optional[Union[ivy.Array, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.take.
This method simply wraps the function, and so the docstring for
ivy.take also applies to this method with minimal changes.
Parameters
----------
self
input array
indices
array indices. Must have an integer data type.
axis
axis over which to select values. If `axis` is negative,
the function must determine the axis along which to select values
by counting from the last dimension.
By default, the flattened input array is used.
mode
specifies how out-of-bounds `indices` will behave.
- ‘raise’ – raise an error
- ‘wrap’ – wrap around
- ‘clip’ – clip to the range (all indices that are too large are
replaced by the index that addresses the last element along that axis.
Note that this disables indexing with negative numbers.)
- 'fill' (default) – returns invalid values (e.g. NaN)
for out-of-bounds indices (see also fill_value below)
fill_value
fill value to return for out-of-bounds slices
(Defaults to NaN for inexact types,
the largest negative value for signed types,
the largest positive value for unsigned types, and True for booleans.)
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains,
otherwise key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was
not applied. Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
an array having the same data type as `x`.
The output array must have the same rank
(i.e., number of dimensions) as `x` and
must have the same shape as `x`,
except for the axis specified by `axis`
whose size must equal the number of elements in `indices`.
Examples
--------
With `ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([True,False,False]),
... b=ivy.array([2.3,4.5,6.7]),
... c=ivy.array([1,2,3]))
>>> indices = ivy.array([[1,9,2]])
>>> y = x.take(indices)
>>> print(y)
{
a: ivy.array([[False, True, False]]),
b: ivy.array([[4.5, nan, 6.69999981]]),
c: ivy.array([[2, -2147483648, 3]])
}
"""
return self._static_take(
self,
indices,
axis=axis,
mode=mode,
fill_value=fill_value,
out=out,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_trim_zeros(
a: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
*,
trim: Optional[str] = "fb",
) -> ivy.Container:
"""ivy.Container static method variant of ivy.trim_zeros. This method
simply wraps the function, and so the docstring for ivy.trim_zeros also
applies to this method with minimal changes.
Parameters
----------
a : 1-D array
Input array.
trim : str, optional
A string with 'f' representing trim from front and 'b' to trim from
back. Default is 'fb', trim zeros from both front and back of the
array.
Returns
-------
1-D array
The result of trimming the input. The input data type is preserved.
Examples
--------
>>> a = ivy.array([0, 0, 0, 0, 8, 3, 0, 0, 7, 1, 0])
>>> ivy.trim_zeros(a)
array([8, 3, 0, 0, 7, 1])
>>> ivy.trim_zeros(a, 'b')
array([0, 0, 0, 0, 8, 3, 0, 0, 7, 1])
>>> ivy.trim_zeros([0, 8, 3, 0, 0])
[8, 3]
"""
return ContainerBase.cont_multi_map_in_function("trim_zeros", a, trim=trim)
def trim_zeros(
self: ivy.Container,
/,
*,
trim: Optional[str] = "fb",
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.trim_zeros. This method
simply wraps the function, and so the docstring for ivy.trim_zeros also
applies to this method with minimal changes.
Parameters
----------
self : ivy.Container
    Input container of 1-D arrays.
trim : str, optional
A string with 'f' representing trim from front and 'b' to trim from
back. Default is 'fb', trim zeros from both front and back of the
array.
Returns
-------
1-D array
The result of trimming the input. The input data type is preserved.
Examples
--------
>>> a = ivy.array([0, 0, 0, 0, 8, 3, 0, 0, 7, 1, 0])
>>> ivy.trim_zeros(a)
array([8, 3, 0, 0, 7, 1])
>>> ivy.trim_zeros(a, 'b')
array([0, 0, 0, 0, 8, 3, 0, 0, 7, 1])
>>> ivy.trim_zeros([0, 8, 3, 0, 0])
[8, 3]
"""
return self._static_trim_zeros(self, trim=trim)
@staticmethod
def _static_unflatten(
x: Union[int, ivy.Array, ivy.NativeArray, ivy.Container],
/,
shape: Union[Tuple[int], ivy.Array, ivy.NativeArray, ivy.Container],
dim: Optional[Union[int, ivy.Container]] = 0,
*,
out: Optional[Union[ivy.Array, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.unflatten. This method
simply wraps the function, and so the docstring for ivy.unflatten also
applies to this method with minimal changes.
Parameters
----------
x
input array
shape
the shape into which the given dimension is unflattened.
dim
the dimension to unflatten. If `dim` is negative,
it is counted from the last dimension. Default is 0.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains,
otherwise key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was
not applied. Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
an array having the same data type as `x`.
The output array has the same shape as `x`,
except that the dimension specified by `dim`
is replaced by the dimensions given in `shape`.
Examples
--------
With 'ivy.Container' input:
>>> x = ivy.Container(a = ivy.array([[True, False, False, True],
...                                  [False, True, False, True]]),
...                   b = ivy.array([[1.2, 2.3, 3.4, 4.5],
...                                  [5.6, 6.7, 7.8, 8.9]]),
...                   c = ivy.array([[1, 2, 3, 4],
...                                  [5, 6, 7, 8]]))
>>> dim = 1
>>> shape = (2, 2)
>>> y = ivy.Container._static_unflatten(x, shape=shape, dim=dim)
>>> print(y)
{
    a: ivy.array([[[True, False], [False, True]],
                  [[False, True], [False, True]]]),
    b: ivy.array([[[1.2, 2.3], [3.4, 4.5]], [[5.6, 6.7], [7.8, 8.9]]]),
    c: ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"unflatten",
x,
shape=shape,
dim=dim,
out=out,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def unflatten(
self: ivy.Container,
/,
shape: Union[Tuple[int], ivy.Array, ivy.NativeArray, ivy.Container],
dim: Optional[Union[int, ivy.Container]] = 0,
*,
out: Optional[Union[ivy.Array, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.unflatten. This method
simply wraps the function, and so the docstring for ivy.unflatten also
applies to this method with minimal changes.
Parameters
----------
self
input array
shape
the shape into which the given dimension is unflattened.
dim
the dimension to unflatten. If `dim` is negative,
it is counted from the last dimension. Default is 0.
out
optional output array, for writing the result to. It must
have a shape that the inputs broadcast to.
key_chains
The key-chains to apply or not apply the method to.
Default is ``None``.
to_apply
If True, the method will be applied to key_chains,
otherwise key_chains will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was
not applied. Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
an array having the same data type as `x`.
The output array has the same shape as `x`,
except that the dimension specified by `dim`
is replaced by the dimensions given in `shape`.
Examples
--------
With 'ivy.Container' input:
>>> x = ivy.Container(a = ivy.array([[True, False, False, True],
... [False, True, False, True]]),
... b = ivy.array([[1.2, 2.3, 3.4, 4.5],
... [5.6, 6.7, 7.8, 8.9]]),
... c = ivy.array([[1, 2, 3, 4],
... [5, 6, 7, 8]]))
>>> dim = 1
>>> shape = (2, 2)
>>> y = x.unflatten(shape=shape, dim=dim)
>>> print(y)
{
a: ivy.array([[[True, False], [False, True]],
[[False, True], [False, True]]]),
b: ivy.array([[[1.2, 2.3], [3.4, 4.5]], [[5.6, 6.7], [7.8, 8.9]]]),
c: ivy.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
}
"""
return self._static_unflatten(
self,
shape=shape,
dim=dim,
out=out,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def concat_from_sequence(
self: ivy.Container,
/,
input_sequence: Union[
Tuple[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
List[Union[ivy.Array, ivy.NativeArray, ivy.Container]],
],
*,
new_axis: Union[int, ivy.Container] = 0,
axis: Union[int, ivy.Container] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.stack. This method simply
wraps the function, and so the docstring for ivy.stack also applies to this
method with minimal changes.
Parameters
----------
self
Container with leaves to join with leaves of other arrays/containers.
Each array leaf must have the same shape.
input_sequence
Container with other leaves to join.
Each array leaf must have the same shape.
new_axis
Whether to insert a new axis before joining the arrays,
default 0 means no new axis is inserted.
new_axis = 0: concatenate
new_axis = 1: stack
axis
axis along which the array leaves will be concatenated. More details
can be found in the docstring for ivy.concat_from_sequence.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an output container with the results.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[0, 1], [2,3]]), b=ivy.array([[4, 5]]))
>>> y = ivy.Container(a=ivy.array([[3, 2], [1,0]]), b=ivy.array([[1, 0]]))
>>> z = ivy.Container.static_concat_from_sequence([x,y],axis=1)
>>> print(z)
{
'a': ivy.array([[[0, 1],
[3, 2]],
[[2, 3],
[1, 0]]]),
'b': ivy.array([[[4, 5],
[1, 0]]])
}
"""
new_input_sequence = (
input_sequence.cont_copy()
if ivy.is_ivy_container(input_sequence)
else input_sequence.copy()
)
new_input_sequence.insert(0, self.cont_copy())
return self.static_concat_from_sequence(
new_input_sequence,
new_axis=new_axis,
axis=axis,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
| ivy/ivy/data_classes/container/experimental/manipulation.py/0 | {
"file_path": "ivy/ivy/data_classes/container/experimental/manipulation.py",
"repo_id": "ivy",
"token_count": 76207
} | 12 |
# global
from typing import Optional, Union, List, Dict
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
# noinspection PyMissingConstructor
class _ContainerWithRandom(ContainerBase):
@staticmethod
def _static_random_uniform(
*,
low: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 0.0,
high: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 1.0,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.random_uniform. This
method simply wraps the function, and so the docstring for
ivy.random_uniform also applies to this method with minimal changes.
Parameters
----------
low
Lower boundary of the output interval. All values generated will be
greater than or equal to ``low``. If array, must have same shape as
``high``.
high
Upper boundary of the output interval. All the values generated will be
less than ``high``. If array, must have same shape as ``low``.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``low`` and ``high`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default floating-point data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples from the parameterized uniform distribution.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([[9.8,7.6],[6.5,2.3]]),
... b=ivy.array([[0.9,2.4],[7.6,5.4]]))
>>> y = ivy.Container(a=ivy.array([[10.9,32.4],[18.7,19.6]]),
... b=ivy.array([[4.3,5.6],[23.4,54.3]]))
>>> ivy.Container._static_random_uniform(low=x, high=y, device='cpu',
...                                      dtype='float64')
{
a: ivy.array([[10.8, 23.7],
[17., 16.6]]),
b: ivy.array([[2.35, 3.69],
[17.4, 48.]])
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([-1.0,-9.0,-3.4])
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),b=ivy.array([0.8, 0.2, 0.2]))
>>> ivy.Container._static_random_uniform(low=x, high=y)
{
a: ivy.array([0.481, -8.03, -2.74]),
b: ivy.array([0.0999, -7.38, -1.29])
}
"""
return ContainerBase.cont_multi_map_in_function(
"random_uniform",
low=low,
high=high,
shape=shape,
device=device,
dtype=dtype,
seed=seed,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def random_uniform(
self: ivy.Container,
/,
*,
high: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 1.0,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.random_uniform. This
method simply wraps the function, and so the docstring for
ivy.random_uniform also applies to this method with minimal changes.
Parameters
----------
self
Lower boundary of the output interval. All values generated will be
greater than or equal to ``low``. If array, must have same shape as
``high``.
high
Upper boundary of the output interval. All the values generated will be
less than ``high``. If array, must have same shape as ``low``.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``low`` and ``high`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default floating-point data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples from the parameterized uniform distribution.
Examples
--------
>>> x = ivy.Container(a=ivy.array([7.5,6.7,0.9]), b=ivy.array([8.7,9.8,4.5]))
>>> x.random_uniform(high=17.4)
{
a: ivy.array([11.2, 10.5, 13.1]),
b: ivy.array([11.2, 11.9, 6.01])
}
>>> x.random_uniform(high=10.2, device='cpu')
{
a: ivy.array([8.55, 10.1, 4.08]),
b: ivy.array([9.45, 9.9, 8.6])
}
>>> x.random_uniform(high=14.2, dtype='float16')
{
a: ivy.array([12.4, 11.7, 7.25]),
b: ivy.array([11.8, 11.8, 4.96])
}
>>> x.random_uniform(high=10.8, device='cpu', dtype='float64')
{
a: ivy.array([8.86, 9.24, 6.43]),
b: ivy.array([8.95, 10.1, 8.51])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.random_uniform(high=11.2, device='cpu', dtype='float64', out=z)
{
a: ivy.array([9.6, 8.24, 3.67]),
b: ivy.array([9.29, 11.2, 9.84])
}
>>> y = ivy.Container(a=10.4, b=17.4)
>>> x.random_uniform(high=y)
{
a: ivy.array([8.24, 9.22, 1.52]),
b: ivy.array([16.5, 13.4, 17.3])
}
>>> x.random_uniform(high=y, device='cpu')
{
a: ivy.array([8.55, 10.1, 4.08]),
b: ivy.array([9.45, 9.9, 8.6])
}
>>> x.random_uniform(high=y, dtype='float16')
{
a: ivy.array([12.4, 11.7, 7.25]),
b: ivy.array([11.8, 11.8, 4.96])
}
>>> x.random_uniform(high=y, device='cpu', dtype='float64')
{
a: ivy.array([8.86, 9.24, 6.43]),
b: ivy.array([8.95, 10.1, 8.51])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.random_uniform(high=y, device='cpu', dtype='float64', out=z)
{
a: ivy.array([9.6, 8.24, 3.67]),
b: ivy.array([9.29, 11.2, 9.84])
}
>>> x = ivy.Container(a=ivy.array([[9.8,7.6],[6.5,2.3]]),
... b=ivy.array([[0.9,2.4],[7.6,5.4]]))
>>> y = ivy.Container(a=ivy.array([[10.9,32.4],[18.7,19.6]]),
... b=ivy.array([[4.3,5.6],[23.4,54.3]]))
>>> x.random_uniform(high=y)
{
a: ivy.array([[10.4, 17.],
[9.81, 10.9]]),
b: ivy.array([[3.6, 4.31],
[18.8, 54.2]])
}
>>> x.random_uniform(high=y, device='cpu')
{
a: ivy.array([[10.1, 7.93],
[7.98, 6.]]),
b: ivy.array([[4.28, 4.65],
[13.9, 28.9]])
}
>>> x.random_uniform(high=y, dtype='float16')
{
a: ivy.array([[10.6, 28.],
[16.4, 4.92]]),
b: ivy.array([[3.61, 4.82],
[12.6, 10.2]])
}
>>> x.random_uniform(high=y, device='cpu', dtype='float64')
{
a: ivy.array([[10.7, 28.4],
[9.29, 17.4]]),
b: ivy.array([[1.88, 4.94],
[17., 9.68]])
}
>>> z = ivy.Container(a=ivy.zeros((2,2)), b=ivy.ones((2,2)))
>>> x.random_uniform(high=y, device='cpu', dtype='float64', out=z)
{
a: ivy.array([[10.4, 29.8],
[12.1, 3.9]]),
b: ivy.array([[3.79, 5.4],
[16.2, 31.7]])
}
"""
return self._static_random_uniform(
low=self,
high=high,
shape=shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
device=device,
dtype=dtype,
seed=seed,
out=out,
)
@staticmethod
def _static_random_normal(
*,
mean: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 0.0,
std: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 1.0,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.random_normal. This
method simply wraps the function, and so the docstring for
ivy.random_normal also applies to this method with minimal changes.
Parameters
----------
mean
The mean of the normal distribution to sample from. Default is ``0.0``.
std
The standard deviation of the normal distribution to sample from.
Must be non-negative. Default is ``1.0``.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``mean`` and ``std`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default floating-point data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples from the parameterized normal distribution.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([[9.8,7.6],[6.5,2.3]]),
... b=ivy.array([[0.9,2.4],[7.6,5.4]]))
>>> y = ivy.Container(a=ivy.array([[10.9,32.4],[18.7,19.6]]),
... b=ivy.array([[4.3,5.6],[23.4,54.3]]))
>>> ivy.Container._static_random_normal(mean=x, std=y, device='cpu',
...                                     dtype='float64')
{
a: ivy.array([[-4.11, 0.651],
[19.3, -30.4]]),
b: ivy.array([[1.15, 3.39],
[-9.35, -13.9]])
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([-1.0,-9.0,-3.4])
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),b=ivy.array([0.8, 0.2, 0.2]))
>>> ivy.Container._static_random_normal(mean=x, std=y)
{
a: ivy.array([-0.651, -9.25, -3.54]),
b: ivy.array([0.464, -8.51, -3.75])
}
"""
return ContainerBase.cont_multi_map_in_function(
"random_normal",
mean=mean,
std=std,
shape=shape,
device=device,
dtype=dtype,
seed=seed,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def random_normal(
self: ivy.Container,
/,
*,
std: Union[float, ivy.Container, ivy.Array, ivy.NativeArray] = 1.0,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.random_normal. This
method simply wraps the function, and so the docstring for
ivy.random_normal also applies to this method with minimal changes.
Parameters
----------
self
The mean of the normal distribution to sample from. Default is ``0.0``.
std
The standard deviation of the normal distribution to sample from.
Must be non-negative. Default is ``1.0``.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``mean`` and ``std`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default floating-point data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples from the parameterized normal distribution.
Examples
--------
>>> x = ivy.Container(a=ivy.array([7.5,6.7,0.9]),
... b=ivy.array([8.7,9.8,4.5]))
>>> x.random_normal(std=17.4)
{
a: ivy.array([11.9, -22.9, -24.8]),
b: ivy.array([44.3, -21.6, 2.03])
}
>>> x.random_normal(std=10.2, device='cpu')
{
a: ivy.array([7.82, 6.21, -0.431]),
b: ivy.array([13.8, 9.9, 7.64])
}
>>> x.random_normal(std=14.2, dtype='float16')
{
a: ivy.array([-18.3, -3.42, 9.55]),
b: ivy.array([-1.31, 7.68, -6.93])
}
>>> x.random_normal(std=10.8, device='cpu', dtype='float64')
{
a: ivy.array([13.4, -3.14, 10.7]),
b: ivy.array([11.7, 4.85, 5.83])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.random_normal(std=11.2, device='cpu', dtype='float64', out=z)
{
a: ivy.array([-6.84, 0.274, 14.2]),
b: ivy.array([29.1, 7.19, 3.])
}
>>> y = ivy.Container(a=10.4, b=17.4)
>>> x.random_normal(std=y)
{
a: ivy.array([-9.5, 8.54, -9.13]),
b: ivy.array([-24.5, 18.9, 11.])
}
>>> x.random_normal(std=y, device='cpu')
{
a: ivy.array([8.47, 8.23, 8.69]),
b: ivy.array([10.7, 16.2, 16.1])
}
>>> x.random_normal(std=y, dtype='float16')
{
a: ivy.array([8.22, -15.9, 10.4]),
b: ivy.array([19.9, 11.5, -2.15])
}
>>> x.random_normal(std=y, device='cpu', dtype='float64')
{
a: ivy.array([19.6, -4.08, 6.09]),
b: ivy.array([-23.9, 6.86, 17.6])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.random_normal(std=y, device='cpu', dtype='float64', out=z)
{
a: ivy.array([14.7, 8.99, 8.46]),
b: ivy.array([22.9, -5.97, -1.28])
}
>>> x = ivy.Container(a=ivy.array([[9.8,7.6],[6.5,2.3]]),
... b=ivy.array([[0.9,2.4],[7.6,5.4]]))
>>> y = ivy.Container(a=ivy.array([[10.9,32.4],[18.7,19.6]]),
... b=ivy.array([[4.3,5.6],[23.4,54.3]]))
>>> x.random_normal(std=y)
{
a: ivy.array([[10.6, 7.89],
[9.39, 19.4]]),
b: ivy.array([[3.76, 4.68],
[17.7, 24.]])
}
>>> x.random_normal(std=y, device='cpu')
{
a: ivy.array([[30.9, 24.6],
[29.9, -25.3]]),
b: ivy.array([[8.02, 1.92],
[-5.34, -54.1]])
}
>>> x.random_normal(std=y, dtype='float16')
{
a: ivy.array([[7.82, -35.],
[11.7, 0.696]]),
b: ivy.array([[-4.07, -2.91],
[19.2, 46.8]])
}
>>> x.random_normal(std=y, device='cpu', dtype='float64')
{
a: ivy.array([[25.4, 28.3],
[19.6, -9.83]]),
b: ivy.array([[2.95, 2.48],
[-30.8, -40.1]])
}
>>> z = ivy.Container(a=ivy.zeros((2,2)), b=ivy.ones((2,2)))
>>> x.random_normal(std=y, device='cpu', dtype='float64', out=z)
{
a: ivy.array([[2.8, -45.6],
[-10.4, 0.65]]),
b: ivy.array([[3.8, 1.43],
[23., 29.4]])
}
"""
return self._static_random_normal(
mean=self,
std=std,
shape=shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
device=device,
dtype=dtype,
seed=seed,
out=out,
)
@staticmethod
def _static_multinomial(
population_size: Union[int, ivy.Container],
num_samples: Union[int, ivy.Container],
/,
*,
batch_size: Union[int, ivy.Container] = 1,
probs: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
replace: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.multinomial. This method
simply wraps the function, and so the docstring for ivy.multinomial
also applies to this method with minimal changes.
Parameters
----------
population_size
The size of the population from which to draw samples.
num_samples
Number of independent samples to draw from the population.
batch_size
Number of tensors to generate. Default is 1.
probs
The unnormalized probabilities for all elements in population,
default is uniform *[batch_shape, population_size]*
replace
Whether to replace samples once they've been drawn. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None)
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples (indices into the population) from the multinomial distribution.
"""
return ContainerBase.cont_multi_map_in_function(
"multinomial",
population_size,
num_samples,
batch_size=batch_size,
probs=probs,
replace=replace,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
device=device,
seed=seed,
out=out,
)
def multinomial(
self: ivy.Container,
population_size: Union[int, ivy.Container],
num_samples: Union[int, ivy.Container],
/,
*,
batch_size: Union[int, ivy.Container] = 1,
replace: Union[bool, ivy.Container] = True,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.multinomial. This
method simply wraps the function, and so the docstring for
ivy.multinomial also applies to this method with minimal changes.
Parameters
----------
self
The unnormalized probabilities for all elements in population,
default is uniform *[batch_shape, population_size]*
population_size
The size of the population from which to draw samples.
num_samples
Number of independent samples to draw from the population.
batch_size
Number of tensors to generate. Default is 1.
replace
Whether to replace samples once they've been drawn. Default is ``True``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None)
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Drawn samples (indices into the population) from the multinomial distribution.
"""
return self._static_multinomial(
population_size,
num_samples,
batch_size=batch_size,
probs=self,
replace=replace,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
device=device,
seed=seed,
out=out,
)
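# A hedged usage sketch for the method above (draws are random; the
# weights in ``self`` act as the unnormalized probabilities):
#
# >>> probs = ivy.Container(a=ivy.array([[0.1, 0.7, 0.2]]))
# >>> samples = probs.multinomial(3, 4)
# # samples.a has shape (1, 4), with indices drawn from {0, 1, 2};
# # index 1 is the most likely outcome under these weights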
@staticmethod
def _static_randint(
low: Union[int, ivy.Container, ivy.Array, ivy.NativeArray],
high: Union[int, ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.randint. This method
simply wraps the function, and so the docstring for ivy.randint also
applies to this method with minimal changes.
Parameters
----------
low
Lowest integer that can be drawn from the distribution.
high
One above the highest integer that can be drawn from the distribution.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``low`` and ``high`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default integer data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Returns an array with the given shape filled with integers from
the uniform distribution in the “half-open” interval [low, high)
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([[9,7],[6,2]]),
... b=ivy.array([[0,2],[10,6]]))
>>> y = ivy.Container(a=ivy.array([[10,32],[18,19]]),
... b=ivy.array([[44,5],[23,54]]))
>>> ivy.Container._static_randint(x, y, device='cpu', dtype='int32')
{
a: ivy.array([[9, 27],
[16, 17]]),
b: ivy.array([[13, 3],
[16, 19]])
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([-1,-9,3])
>>> y = ivy.Container(a=ivy.array([4,7,9]),b=ivy.array([14,17,34]))
>>> ivy.Container._static_randint(x, y)
{
a: ivy.array([1, 6, 5]),
b: ivy.array([0, 10, 17])
}
"""
return ContainerBase.cont_multi_map_in_function(
"randint",
low,
high,
shape=shape,
device=device,
dtype=dtype,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
seed=seed,
out=out,
)
def randint(
self: ivy.Container,
high: Union[int, ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
shape: Optional[Union[ivy.Shape, ivy.NativeShape, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
seed: Optional[Union[int, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.randint. This method
simply wraps the function, and so the docstring for ivy.randint also
applies to this method with minimal changes.
Parameters
----------
self
Lowest integer that can be drawn from the distribution.
high
One above the highest integer that can be drawn from the distribution.
shape
If the given shape is, e.g ``(m, n, k)``, then ``m * n * k`` samples
are drawn. Can only be specified when ``low`` and ``high`` are numeric
values, else exception will be raised.
Default is ``None``, where a single value is returned.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc.
(Default value = None).
dtype
output array data type. If ``dtype`` is ``None``, the output array data
type will be the default integer data type. Default ``None``
seed
A python integer. Used to create a random seed distribution
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Returns an array with the given shape filled with integers from
the uniform distribution in the “half-open” interval [low, high)
Examples
--------
>>> x = ivy.Container(a=ivy.array([7,6,0]),
... b=ivy.array([8,9,4]))
>>> x.randint(30)
{
a: ivy.array([23, 15, 20]),
b: ivy.array([28, 22, 18])
}
>>> x.randint(10, device='cpu')
{
a: ivy.array([9, 7, 7]),
b: ivy.array([8, 9, 9])
}
>>> x.randint(102, dtype='int8')
{
a: ivy.array([9, 8, 2]),
b: ivy.array([62, 62, 60])
}
>>> x.randint(54, device='cpu', dtype='int64')
{
a: ivy.array([30, 29, 26]),
b: ivy.array([24, 24, 21])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.randint(21, device='cpu', dtype='int8', out=z)
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> y = ivy.Container(a=54, b=17)
>>> x.randint(y)
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> x.randint(y, device='cpu')
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> x.randint(y, dtype='int64')
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> x.randint(y, device='cpu', dtype='int32')
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> z = ivy.Container(a=ivy.zeros((3,)), b=ivy.ones((3,)))
>>> x.randint(y, device='cpu', dtype='int16', out=z)
{
a: ivy.array([7, 6, 0]),
b: ivy.array([8, 9, 4])
}
>>> x = ivy.Container(a=ivy.array([[9,7],[6,2]]),
... b=ivy.array([[0,2],[10,6]]))
>>> y = ivy.Container(a=ivy.array([[10,32],[18,19]]),
... b=ivy.array([[44,5],[23,54]]))
>>> x.randint(y)
{
a: ivy.array([[9, 7],
[6, 2]]),
b: ivy.array([[0, 2],
[10, 6]])
}
>>> x.randint(y, device='cpu')
{
a: ivy.array([[9, 7],
[6, 2]]),
b: ivy.array([[0, 2],
[10, 6]])
}
>>> x.randint(y, dtype='int64')
{
a: ivy.array([[9, 7],
[6, 2]]),
b: ivy.array([[0, 2],
[10, 6]])
}
>>> x.randint(y, device='cpu', dtype='int32')
{
a: ivy.array([[9, 7],
[6, 2]]),
b: ivy.array([[0, 2],
[10, 6]])
}
>>> z = ivy.Container(a=ivy.zeros((2,2)), b=ivy.ones((2,2)))
>>> x.randint(y, device='cpu', dtype='int16', out=z)
{
a: ivy.array([[9, 7],
[6, 2]]),
b: ivy.array([[0, 2],
[10, 6]])
}
"""
return self._static_randint(
self,
high,
shape=shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
device=device,
dtype=dtype,
seed=seed,
out=out,
)
@staticmethod
def _static_shuffle(
x: Union[int, ivy.Container, ivy.Array, ivy.NativeArray],
axis: Optional[Union[int, ivy.Container]] = 0,
/,
*,
seed: Optional[Union[int, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.shuffle. This method
simply wraps the function, and so the docstring for ivy.shuffle also
applies to this method with minimal changes.
Parameters
----------
x
Input array or container. Should have a numeric data type.
axis
The axis which input array or container is shuffled along. Default is 0.
seed
A python integer. Used to create a random seed distribution
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
A container object, shuffled along the first dimension.
Examples
--------
>>> x = ivy.Container(a=ivy.array([7, 6, 0]),
... b=ivy.array([8, 9, 4]))
>>> ivy.Container._static_shuffle(x)
{
a: ivy.array([7, 0, 6]),
b: ivy.array([8, 4, 9])
}
"""
return ContainerBase.cont_multi_map_in_function(
"shuffle",
x,
axis,
seed=seed,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def shuffle(
self: ivy.Container,
axis: Optional[Union[int, ivy.Container]] = 0,
/,
*,
seed: Optional[Union[int, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.shuffle. This method
simply wraps the function, and so the docstring for ivy.shuffle also
applies to this method with minimal changes.
Parameters
----------
self
Input container. Should have a numeric data type.
axis
The axis which input container is shuffled along. Default is 0.
seed
A python integer. Used to create a random seed distribution
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
A container object, shuffled along the first dimension.
Examples
--------
>>> x = ivy.Container(a=ivy.array([5, 2, 9]),
... b=ivy.array([7, 1, 6]))
>>> y = ivy.Container.shuffle(x)
>>> print(y)
{
a: ivy.array([9, 5, 2]),
b: ivy.array([6, 7, 1])
}
"""
return self._static_shuffle(
self,
axis,
seed=seed,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
| ivy/ivy/data_classes/container/random.py/0 | {
"file_path": "ivy/ivy/data_classes/container/random.py",
"repo_id": "ivy",
"token_count": 22064
} | 13 |
# global
from typing import Optional, Union
# local
import ivy
from .base import NestedArrayBase
class NestedArrayElementwise(NestedArrayBase):
@staticmethod
def static_add(
x1: Union[NestedArrayBase, ivy.Array, ivy.NestedArray],
x2: Union[NestedArrayBase, ivy.Array, ivy.NestedArray],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[ivy.Array] = None,
) -> NestedArrayBase:
        # TODO: not yet implemented; once nested-array elementwise ops are
        # wired up, this should reduce to: return x1._elementwise_op(x2, ivy.add)
        pass
| ivy/ivy/data_classes/nested_array/elementwise.py/0 | {
"file_path": "ivy/ivy/data_classes/nested_array/elementwise.py",
"repo_id": "ivy",
"token_count": 222
} | 14 |
use super::{
handle_status, FromPrimitive, Literal, NativeType, PrimitiveType, Shape, XlaComputation, XlaOp,
};
use crate::{c_lib, Error, Result};
use std::rc::Rc;
use pyo3::prelude::*;
/// A builder is used to keep track of a computation graph while it's being built.
pub(super) struct XlaBuilderInternal(c_lib::xla_builder);
#[derive(Clone)]
#[pyclass(unsendable)]
pub struct XlaBuilder(Rc<XlaBuilderInternal>);
impl XlaBuilder {
/// Create a new builder with the associated name, the name is only used for debugging
/// purposes.
pub fn new(name: &str) -> XlaBuilder {
let name = std::ffi::CString::new(name).unwrap();
let xla_builder = unsafe { c_lib::xla_builder_create(name.as_ptr()) };
XlaBuilder(Rc::new(XlaBuilderInternal(xla_builder)))
}
fn ptr(&self) -> c_lib::xla_builder {
self.0 .0
}
/// Build a computation from the specified root node. This can only be called once.
pub fn build(&self, op: &XlaOp) -> Result<XlaComputation> {
let mut result: c_lib::xla_computation = std::ptr::null_mut();
let status = unsafe { c_lib::build(self.ptr(), op.op, &mut result) };
handle_status(status)?;
Ok(XlaComputation(result))
}
/// This returns `Ok(())` if the graph creation has not generated any error so far. Otherwise
/// the first error is returned.
pub fn first_error(&self) -> Result<()> {
let status = unsafe { c_lib::first_error(self.ptr()) };
handle_status(status)?;
Ok(())
}
/// This returns `Ok(())` if the graph creation has not generated any error so far. Otherwise
/// the current status is returned.
pub fn get_current_status(&self) -> Result<()> {
let status = unsafe { c_lib::get_current_status(self.ptr()) };
handle_status(status)?;
Ok(())
}
/// Create a node with a constant value defined by the specified literal.
pub fn constant_literal(&self, literal: &Literal) -> Result<XlaOp> {
let op = unsafe { c_lib::constant_literal(self.ptr(), literal.0) };
self.wrap(op)
}
/// Create a node with a constant scalar value using the type of the element that is passed as
/// argument.
pub fn constant_r0<T: NativeType>(&self, f: T) -> Result<XlaOp> {
let op = unsafe { T::constant_r0(self.ptr(), f) };
self.wrap(op)
}
/// A shorter notation for `constant_r0`.
pub fn c0<T: NativeType>(&self, f: T) -> Result<XlaOp> {
self.constant_r0(f)
}
pub fn wrap(&self, op: c_lib::xla_op) -> Result<XlaOp> {
self.get_current_status()?;
Ok(XlaOp { op, builder: self.clone() })
}
/// Create an input node with the specified type and dimensions. A literal has to be passed for
/// each of the parameter in the graph when calling the `execute` function, the parameter
/// number are specified as incrementing values from 0 and represent the index of the
/// associated literal in the slice passed to `execute`.
pub fn parameter(
&self,
parameter_number: i64,
ty: super::ElementType,
dims: &[i64],
name: &str,
) -> Result<XlaOp> {
let name = std::ffi::CString::new(name).unwrap();
let op = unsafe {
c_lib::parameter(
self.ptr(),
parameter_number,
ty.primitive_type() as i32,
dims.len() as i32,
dims.as_ptr(),
name.as_ptr(),
)
};
self.wrap(op)
}
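    // Illustrative usage (a hedged sketch; `add_` mirrors upstream xla-rs and
    // may differ from this crate's actual op names, and `ElementType::F32` is
    // assumed to exist):
    //     let builder = XlaBuilder::new("add");
    //     let x = builder.parameter(0, ElementType::F32, &[2], "x")?;
    //     let y = builder.parameter(1, ElementType::F32, &[2], "y")?;
    //     let computation = builder.build(&x.add_(&y)?)?;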
/// Read a single value from the implicit streaming interface of the device.
pub fn infeed(&self, ty: PrimitiveType, dims: &[i64], config: &str) -> Result<XlaOp> {
let config = std::ffi::CString::new(config).unwrap();
let op = unsafe {
c_lib::infeed(self.ptr(), ty as i32, dims.len() as i32, dims.as_ptr(), config.as_ptr())
};
self.wrap(op)
}
pub fn parameter_s(&self, parameter_number: i64, shape: &Shape, name: &str) -> Result<XlaOp> {
let c_shape = shape.c_shape()?;
let name = std::ffi::CString::new(name).unwrap();
let op = unsafe {
c_lib::parameter_s(self.ptr(), parameter_number, c_shape.as_ptr(), name.as_ptr())
};
drop(c_shape);
self.wrap(op)
}
pub fn constant_r1c<T: NativeType>(&self, f: T, len: usize) -> Result<XlaOp> {
let op = unsafe { T::constant_r1c(self.ptr(), f, len) };
self.wrap(op)
}
/// A one dimension constant node based on some slice stored on the host.
pub fn constant_r1<T: NativeType>(&self, f: &[T]) -> Result<XlaOp> {
let op = unsafe { T::constant_r1(self.ptr(), f.as_ptr(), f.len()) };
self.wrap(op)
}
/// Shorthand function for `constant_r1`.
pub fn c1<T: NativeType>(&self, f: &[T]) -> Result<XlaOp> {
self.constant_r1(f)
}
/// A scalar node with the zero value for the associated type.
pub fn zero(&self, ty: super::ElementType) -> Result<XlaOp> {
let op = unsafe { c_lib::op_zero(self.ptr(), ty.primitive_type() as i32) };
self.wrap(op)
}
/// A scalar node with the one value for the associated type.
pub fn one(&self, ty: super::ElementType) -> Result<XlaOp> {
let op = unsafe { c_lib::op_one(self.ptr(), ty.primitive_type() as i32) };
self.wrap(op)
}
/// A scalar node with the minimum value for the associated type.
pub fn min_value(&self, ty: super::ElementType) -> Result<XlaOp> {
let op = unsafe { c_lib::op_min_value(self.ptr(), ty.primitive_type() as i32) };
self.wrap(op)
}
/// A scalar node with the maximum value for the associated type.
pub fn max_value(&self, ty: super::ElementType) -> Result<XlaOp> {
let op = unsafe { c_lib::op_max_value(self.ptr(), ty.primitive_type() as i32) };
self.wrap(op)
}
/// A constant node with the specified shape that holds increasing values starting from 0 along
/// the iota dimension.
pub fn iota(&self, ty: super::ElementType, dims: &[i64], iota_dimension: i64) -> Result<XlaOp> {
let op = unsafe {
c_lib::op_iota(
self.ptr(),
ty.primitive_type() as i32,
dims.len(),
dims.as_ptr(),
iota_dimension,
)
};
self.wrap(op)
}
/// A constant node for a unidimensional array of increasing values starting from 0.
pub fn iota1(&self, ty: super::ElementType, size: usize) -> Result<XlaOp> {
let op = unsafe { c_lib::op_iota1(self.ptr(), ty.primitive_type() as i32, size) };
self.wrap(op)
}
pub fn call(&self, computation: &XlaComputation, operands: &[XlaOp]) -> Result<XlaOp> {
let operands: Vec<_> = operands.iter().map(|a| a.op).collect();
let op = unsafe {
c_lib::op_call(self.ptr(), computation.0, operands.len(), operands.as_ptr())
};
self.wrap(op)
}
pub fn map(
&self,
operands: &[XlaOp],
computation: &XlaComputation,
dims: &[i64],
static_operands: &[XlaOp]
) -> Result<XlaOp> {
let operands: Vec<_> = operands.iter().map(|a| a.op).collect();
let static_operands: Vec<_> = static_operands.iter().map(|a| a.op).collect();
let op = unsafe {
c_lib::op_map(
self.ptr(),
operands.len(),
operands.as_ptr(),
computation.0,
dims.len(),
dims.as_ptr(),
static_operands.len(),
static_operands.as_ptr(),
)
};
self.wrap(op)
}
/// An error node, using the 'internal error' error type.
pub fn internal_error(&self, msg: &str) -> XlaOp {
let msg = std::ffi::CString::new(msg).unwrap();
let op = unsafe { c_lib::op_internal_error(self.ptr(), msg.as_ptr()) };
XlaOp { op, builder: self.clone() }
}
/// An error node, using the 'unknown error' error type.
pub fn unknown_error(&self, msg: &str) -> XlaOp {
let msg = std::ffi::CString::new(msg).unwrap();
let op = unsafe { c_lib::op_unknown_error(self.ptr(), msg.as_ptr()) };
XlaOp { op, builder: self.clone() }
}
/// An error node, using the 'invalid argument error' error type.
pub fn invalid_argument_error(&self, msg: &str) -> XlaOp {
let msg = std::ffi::CString::new(msg).unwrap();
let op = unsafe { c_lib::op_invalid_argument_error(self.ptr(), msg.as_ptr()) };
XlaOp { op, builder: self.clone() }
}
/// Wrap a potential error in an error node. If the argument is `Ok(op)` then `op` is passed
/// back as the result.
pub fn wrap_error(&self, op: Result<XlaOp>) -> XlaOp {
match op {
Ok(op) => op,
Err(err) => self.internal_error(&err.to_string()),
}
}
/// The shape associated with this op.
pub fn get_shape(&self, op: &XlaOp) -> Result<Shape> {
let mut out: c_lib::shape = std::ptr::null_mut();
let status = unsafe { c_lib::get_shape(self.ptr(), op.op, &mut out) };
handle_status(status)?;
let c_shape = super::shape::CShape::from_ptr(out);
c_shape.shape()
}
/// The dimension sizes associated with this op.
pub fn get_dims(&self, op: &XlaOp) -> Result<Vec<usize>> {
let rank = self.get_dimensions_size(op)?;
let mut dims = vec![0; rank];
let status = unsafe { c_lib::get_dimensions(self.ptr(), op.op, dims.as_mut_ptr()) };
handle_status(status)?;
Ok(dims)
}
/// The element type associated with this op.
pub fn get_primitive_type(&self, op: &XlaOp) -> Result<super::PrimitiveType> {
let mut ty = 0i32;
let status = unsafe { c_lib::get_element_type(self.ptr(), op.op, &mut ty) };
handle_status(status)?;
FromPrimitive::from_i32(ty).ok_or(Error::UnexpectedElementType(ty))
}
/// The number of dimensions (a.k.a the rank) associated with this op.
pub fn get_dimensions_size(&self, op: &XlaOp) -> Result<usize> {
let mut dsize = 0i32;
let status = unsafe { c_lib::get_dimensions_size(self.ptr(), op.op, &mut dsize) };
handle_status(status)?;
Ok(dsize as usize)
}
/// Build a tuple from multiple operands.
pub fn tuple<B: std::borrow::Borrow<XlaOp>>(&self, args: &[B]) -> Result<XlaOp> {
let args: Vec<_> = args.iter().map(|a| a.borrow().op).collect();
let op = unsafe { c_lib::op_tuple(self.ptr(), args.as_ptr(), args.len()) };
self.wrap(op)
}
}
impl Drop for XlaBuilderInternal {
fn drop(&mut self) {
unsafe { c_lib::xla_builder_free(self.0) }
}
}
| ivy/ivy/engines/XLA/rust_api/src/wrappers/xla_builder.rs/0 | {
"file_path": "ivy/ivy/engines/XLA/rust_api/src/wrappers/xla_builder.rs",
"repo_id": "ivy",
"token_count": 4757
} | 15 |
# global
import jax
backend_version = {"version": jax.__version__}
# local sub-modules
from .activations import *
from .converters import *
from .creation import *
from .data_type import *
from .device import *
from .elementwise import *
from .general import *
from .gradients import *
from .layers import *
from .linear_algebra import *
from .losses import *
from .manipulation import *
from .norms import *
from .random import *
from .searching import *
from .set import *
from .sorting import *
from .sparse_array import *
from .statistical import *
from .utility import *
| ivy/ivy/functional/backends/jax/experimental/__init__.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/experimental/__init__.py",
"repo_id": "ivy",
"token_count": 173
} | 16 |
# global
from typing import Callable
import mxnet as mx
# local
import ivy
from ivy.functional.ivy.gradients import (
_flatten_containers,
_rebuild_flattened_containers,
)
from ivy.utils.exceptions import IvyNotImplementedException
def bind_custom_gradient_function(func, custom_grad_fn):
raise IvyNotImplementedException()
def vjp(func: Callable, *primals):
flattened_primals, ret_idxs = _flatten_containers(primals)
def grad_fn(*x_in):
return _flatten_containers(
ivy.to_native(
func(
*ivy.to_ivy(
_rebuild_flattened_containers(x_in, ret_idxs), nested=True
)
),
nested=True,
include_derived=True,
)
)
with mx.autograd.record():
flat_primals_out, func_ret_idxs = grad_fn(
*ivy.to_native(flattened_primals, nested=True)
)
primals_out = _rebuild_flattened_containers(flat_primals_out, func_ret_idxs)
def vjpfun(x_in):
grads = mx.autograd.grad(
flat_primals_out,
ivy.to_native(flattened_primals, nested=True),
head_grads=ivy.to_native(_flatten_containers(x_in)[0], nested=True),
)
return _rebuild_flattened_containers(
ivy.to_ivy(grads, nested=True, include_derived=True), ret_idxs
)
return (ivy.to_ivy(primals_out, nested=True, include_derived=True), vjpfun)
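# Illustrative usage (a hedged sketch, not from the test suite): with the
# MXNet backend active, `vjp` returns the primal output plus a pullback that
# maps an output cotangent to input cotangents.
#
#     import ivy
#     ivy.set_backend("mxnet")
#     primal = ivy.array([1.0, 2.0, 3.0])
#     out, vjp_fn = ivy.vjp(lambda x: x * x, primal)
#     grads = vjp_fn(ivy.ones_like(out))  # elementwise 2 * primal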
def jvp(func: Callable, primals, tangents):
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/experimental/gradients.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/experimental/gradients.py",
"repo_id": "ivy",
"token_count": 770
} | 17 |
import mxnet as mx
from numbers import Number
from typing import Union, Tuple, Optional, List, Sequence
import ivy
from ivy.utils.exceptions import IvyNotImplementedException
def concat(
xs: Union[(Tuple[(None, ...)], List[None])],
/,
*,
axis: int = 0,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def expand_dims(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
copy: Optional[bool] = None,
axis: Union[(int, Sequence[int])] = 0,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def flip(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
copy: Optional[bool] = None,
axis: Optional[Union[(int, Sequence[int])]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def permute_dims(
x: Union[(None, mx.ndarray.NDArray)],
/,
axes: Tuple[(int, ...)],
*,
copy: Optional[bool] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def reshape(
x: Union[(None, mx.ndarray.NDArray)],
/,
shape: Union[(ivy.NativeShape, Sequence[int])],
*,
copy: Optional[bool] = None,
order: str = "C",
allowzero: bool = True,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def roll(
x: Union[(None, mx.ndarray.NDArray)],
/,
shift: Union[(int, Sequence[int])],
*,
axis: Optional[Union[(int, Sequence[int])]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def squeeze(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
copy: Optional[bool] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.squeeze(x, axis=axis)
def stack(
arrays: Union[(Tuple[None], List[None])],
/,
*,
axis: int = 0,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def split(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
copy: Optional[bool] = None,
num_or_size_splits: Optional[Union[(int, Sequence[int])]] = None,
axis: int = 0,
with_remainder: bool = False,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def repeat(
x: Union[(None, mx.ndarray.NDArray)],
/,
repeats: Union[(int, List[int])],
*,
axis: Optional[int] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def tile(
x: Union[(None, mx.ndarray.NDArray)],
/,
repeats: Sequence[int],
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.tile(x, repeats)
def constant_pad(
x, /, pad_width, *, value=0, out: Optional[Union[(None, mx.ndarray.NDArray)]] = None
):
raise IvyNotImplementedException()
def zero_pad(
x, /, pad_width, *, out: Optional[Union[(None, mx.ndarray.NDArray)]] = None
):
raise IvyNotImplementedException()
def swapaxes(
x,
axis0,
axis1,
/,
*,
copy: Optional[bool] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
):
raise IvyNotImplementedException()
def clip(
x: Union[(None, mx.ndarray.NDArray)],
x_min: Union[(Number, None, mx.ndarray.NDArray)],
x_max: Union[(Number, None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.clip(x, x_min, x_max)
def unstack(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
copy: Optional[bool] = None,
axis: int = 0,
keepdims: bool = False,
) -> List[None]:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/manipulation.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/manipulation.py",
"repo_id": "ivy",
"token_count": 1876
} | 18 |
# global
import numpy as np
backend_version = {"version": np.__version__}
# local sub-modules
from .activations import *
from .creation import *
from .data_type import *
from .device import *
from .elementwise import *
from .general import *
from .gradients import *
from .layers import *
from .linear_algebra import *
from .losses import *
from .manipulation import *
from .norms import *
from .random import *
from .searching import *
from .set import *
from .sorting import *
from .sparse_array import *
from .statistical import *
from .utility import *
| ivy/ivy/functional/backends/numpy/experimental/__init__.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/experimental/__init__.py",
"repo_id": "ivy",
"token_count": 166
} | 19 |
# global
import numpy as np
from typing import Union, Optional, Sequence
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.backends.numpy.helpers import _scalar_output_to_0d_array
from . import backend_version
from ivy.utils.einsum_parser import legalise_einsum_expr
# Array API Standard #
# -------------------#
@_scalar_output_to_0d_array
def min(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[np.ndarray] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
axis = tuple(axis) if isinstance(axis, list) else axis
if where is not None:
ret = np.amin(
a=x, axis=axis, keepdims=keepdims, initial=initial, where=where, out=out
)
else:
ret = np.amin(a=x, axis=axis, keepdims=keepdims, initial=initial, out=out)
return np.asarray(ret)
min.support_native_out = True
def max(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
axis = tuple(axis) if isinstance(axis, list) else axis
return np.asarray(np.amax(a=x, axis=axis, keepdims=keepdims, out=out))
max.support_native_out = True
@_scalar_output_to_0d_array
def mean(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
axis = tuple(axis) if isinstance(axis, list) else axis
return np.mean(x, axis=axis, keepdims=keepdims, dtype=x.dtype, out=out)
mean.support_native_out = True
def _infer_dtype(dtype: np.dtype):
default_dtype = ivy.infer_default_dtype(dtype)
if ivy.dtype_bits(dtype) < ivy.dtype_bits(default_dtype):
return default_dtype
return dtype
def prod(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[np.dtype] = None,
keepdims: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
dtype = _infer_dtype(x.dtype)
axis = tuple(axis) if isinstance(axis, list) else axis
return np.asarray(np.prod(a=x, axis=axis, dtype=dtype, keepdims=keepdims, out=out))
prod.support_native_out = True
def std(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
axis = tuple(axis) if isinstance(axis, list) else axis
return np.asarray(np.std(x, axis=axis, ddof=correction, keepdims=keepdims, out=out))
std.support_native_out = True
def sum(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[np.dtype] = None,
keepdims: Optional[bool] = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if dtype is None and not ivy.is_bool_dtype(x):
dtype = x.dtype
axis = tuple(axis) if isinstance(axis, list) else axis
return np.asarray(
np.sum(
a=x,
axis=axis,
dtype=dtype,
keepdims=keepdims,
out=out,
)
)
sum.support_native_out = True
@_scalar_output_to_0d_array
def var(
x: np.ndarray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if axis is None:
axis = tuple(range(len(x.shape)))
axis = (axis,) if isinstance(axis, int) else tuple(axis)
if isinstance(correction, int):
return ivy.astype(
np.var(x, axis=axis, ddof=correction, keepdims=keepdims, out=out),
x.dtype,
copy=False,
)
if x.size == 0:
return np.asarray(float("nan"))
size = 1
for a in axis:
size *= x.shape[a]
return ivy.astype(
np.multiply(
np.var(x, axis=axis, keepdims=keepdims, out=out),
ivy.stable_divide(size, (size - correction)),
),
x.dtype,
copy=False,
)
var.support_native_out = True
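# Worked note (hedged sketch) on the fractional-correction branch above: for a
# reduction over 5 elements with correction=1.5, the biased variance is
# rescaled by size / (size - correction) = 5 / 3.5, generalising Bessel's
# correction (ddof=1) to non-integer degrees of freedom.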
# Extra #
# ------#
@with_unsupported_dtypes({"1.26.3 and below": ("bfloat16", "bool")}, backend_version)
def cumprod(
x: np.ndarray,
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[np.dtype] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if dtype is None:
dtype = _infer_dtype(x.dtype)
if not (exclusive or reverse):
return np.cumprod(x, axis, dtype=dtype, out=out)
elif exclusive and reverse:
x = np.cumprod(np.flip(x, axis=axis), axis=axis, dtype=dtype)
x = np.swapaxes(x, axis, -1)
x = np.concatenate((np.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = np.swapaxes(x, axis, -1)
return np.flip(x, axis=axis)
elif exclusive:
x = np.swapaxes(x, axis, -1)
x = np.concatenate((np.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = np.cumprod(x, -1, dtype=dtype)
return np.swapaxes(x, axis, -1)
elif reverse:
x = np.cumprod(np.flip(x, axis=axis), axis=axis, dtype=dtype)
return np.flip(x, axis=axis)
cumprod.support_native_out = True
def cumsum(
x: np.ndarray,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
*,
dtype: Optional[np.dtype] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if dtype is None:
dtype = _infer_dtype(x.dtype)
if exclusive or reverse:
if exclusive and reverse:
x = np.cumsum(np.flip(x, axis=axis), axis=axis, dtype=dtype)
x = np.swapaxes(x, axis, -1)
x = np.concatenate((np.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = np.swapaxes(x, axis, -1)
res = np.flip(x, axis=axis)
elif exclusive:
x = np.swapaxes(x, axis, -1)
x = np.concatenate((np.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = np.cumsum(x, -1, dtype=dtype)
res = np.swapaxes(x, axis, -1)
elif reverse:
x = np.cumsum(np.flip(x, axis=axis), axis=axis, dtype=dtype)
res = np.flip(x, axis=axis)
return res
return np.cumsum(x, axis, dtype=dtype, out=out)
cumsum.support_native_out = True
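# Worked example (hedged sketch) of the exclusive/reverse flags handled above,
# for x = np.array([1, 2, 3]) along axis 0:
#     cumsum(x)                                -> [1, 3, 6]
#     cumsum(x, exclusive=True)                -> [0, 1, 3]   (shifted, seeded with 0)
#     cumsum(x, reverse=True)                  -> [6, 5, 3]
#     cumsum(x, exclusive=True, reverse=True)  -> [5, 3, 0]
# cumprod follows the same scheme with multiplication and a seed of 1.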
@_scalar_output_to_0d_array
def einsum(
equation: str, *operands: np.ndarray, out: Optional[np.ndarray] = None
) -> np.ndarray:
    equation = legalise_einsum_expr(equation, *operands)
return np.einsum(equation, *operands, out=out)
einsum.support_native_out = True
| ivy/ivy/functional/backends/numpy/statistical.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/statistical.py",
"repo_id": "ivy",
"token_count": 3258
} | 20 |
"""Collection of Paddle general functions, wrapped to fit Ivy syntax and
signature."""
# global
from numbers import Number
from typing import Optional, Union, Sequence, Callable, List, Tuple
import paddle
import numpy as np
import multiprocessing as _multiprocessing
# local
import ivy
import ivy.functional.backends.paddle as paddle_backend
from ivy.func_wrapper import with_unsupported_device_and_dtypes
from ivy.functional.ivy.general import _broadcast_to
from ivy.utils.exceptions import _check_inplace_update_support
from . import backend_version
def is_native_array(x, /, *, exclusive=False):
if isinstance(x, paddle.Tensor):
if exclusive and not x.stop_gradient:
return False
return True
return False
def array_equal(x0: paddle.Tensor, x1: paddle.Tensor, /) -> bool:
return bool(paddle_backend.all(paddle_backend.equal(x0, x1)))
def container_types():
return []
def current_backend_str() -> str:
return "paddle"
def _check_query(query):
if isinstance(query, Sequence):
return not any(isinstance(item, (Sequence, paddle.Tensor)) for item in query)
else:
return True
def _squeeze_helper(query, x_ndim):
# as of paddle v2.5, paddle returns 1d tensors instead of scalars
return_scalar = (
(isinstance(query, Number) and x_ndim == 1)
or (
isinstance(query, tuple)
and all(isinstance(index, int) for index in query)
and len(query) == x_ndim
)
or (isinstance(query, paddle.Tensor) and query.ndim == x_ndim)
)
# checks if any slice has step > 1, this keeps all the dimensions
# in the paddle array which is not desirable
if not isinstance(query, Sequence):
query = [query]
slice_squeeze = list(
map(
lambda idx: isinstance(idx, slice)
and idx.step is not None
and idx.step != 1,
query,
)
)
if any(slice_squeeze):
squeeze_indices = tuple(
[
idx
for idx, val in enumerate(slice_squeeze)
if (val is False and query[idx] is not None)
]
)
elif return_scalar:
squeeze_indices = ()
else:
squeeze_indices = None
return squeeze_indices
@with_unsupported_device_and_dtypes(
{
"2.6.0 and below": {
"cpu": ("int8", "int16", "float16", "complex64", "complex128")
}
},
backend_version,
)
def get_item(
x: paddle.Tensor,
/,
query: Union[paddle.Tensor, Tuple],
*,
copy: Optional[bool] = None,
) -> paddle.Tensor:
if copy:
x = paddle.clone(x)
if (
isinstance(query, paddle.Tensor)
and query.dtype == paddle.bool
and query.ndim == 0
) or isinstance(query, bool):
# special case to handle scalar boolean indices
if query is True:
return x[None]
else:
return paddle.zeros(shape=[0] + x.shape, dtype=x.dtype)
if isinstance(query, paddle.Tensor) and query.dtype == paddle.bool:
        # masked queries x[bool_1,bool_2,...,bool_i]
return paddle.gather_nd(x, paddle.nonzero(query))
if isinstance(query, paddle.Tensor):
query = query.cast("int64")
squeeze_indices = _squeeze_helper(query, x.ndim)
# regular queries x[idx_1,idx_2,...,idx_i]
# array queries idx = Tensor(idx_1,idx_2,...,idx_i), x[idx]
ret = x.__getitem__(query)
return ret.squeeze(squeeze_indices) if squeeze_indices else ret
get_item.partial_mixed_handler = (
lambda x, query, **kwargs: _check_query(query) and 0 not in x.shape
)
def to_numpy(
x: Union[paddle.Tensor, List[paddle.Tensor]], /, *, copy: bool = True
) -> Union[np.ndarray, List[np.ndarray]]:
if isinstance(x, (float, int, bool)):
return x
elif isinstance(x, np.ndarray):
if copy:
return x.copy()
else:
return x
elif paddle.is_tensor(x):
dtype = ivy.as_ivy_dtype(x.dtype)
if dtype == "bfloat16":
x = x.astype("float32")
if copy:
return np.array(x).astype(dtype)
else:
return np.asarray(x).astype(dtype)
elif isinstance(x, list):
return [ivy.to_numpy(u) for u in x]
raise ivy.utils.exceptions.IvyException("Expected a Paddle Tensor.")
def to_scalar(x: paddle.Tensor, /) -> Number:
if isinstance(x, (Number, complex)):
return x
return x.item()
def to_list(x: paddle.Tensor, /) -> list:
return x.tolist()
def gather(
params: paddle.Tensor,
indices: paddle.Tensor,
/,
*,
axis: Optional[int] = -1,
batch_dims: Optional[int] = 0,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
def _gather(params1):
if batch_dims == 0:
result = paddle.gather(
params1, paddle_backend.reshape(indices, shape=[-1]), axis=axis
)
# inputs are unstacked batch_dims times
# because paddle.gather does not support batch_dims
else:
params1_list = paddle_backend.unstack(params1, axis=0)
indices_list = paddle_backend.unstack(indices, axis=0)
for b in range(1, batch_dims):
params1_list = [
p2
for p1 in params1_list
for p2 in paddle_backend.unstack(p1, axis=0)
]
indices_list = [
i2
for i1 in indices_list
for i2 in paddle_backend.unstack(i1, axis=0)
]
result = []
for p, i in zip(params1_list, indices_list):
result.append(
paddle.gather(
p, paddle_backend.reshape(i, shape=[-1]), axis=axis - batch_dims
)
)
result = paddle_backend.concat(result, axis=0)
new_shape = (
params1.shape[:axis]
+ indices.shape[batch_dims:]
+ params1.shape[axis + 1 :]
)
return paddle_backend.reshape(result, shape=new_shape)
if axis is not None:
axis = axis % params.ndim
if batch_dims is not None:
batch_dims = batch_dims % params.ndim
ivy.utils.assertions.check_gather_input_valid(params, indices, axis, batch_dims)
if params.dtype in [
paddle.int8,
paddle.int16,
paddle.float16,
paddle.complex64,
paddle.complex128,
paddle.bool,
]:
if paddle.is_complex(params):
return paddle.complex(_gather(params.real()), _gather(params.imag()))
return _gather(params.cast("float32")).cast(params.dtype)
return _gather(params)
@with_unsupported_device_and_dtypes(
{"2.6.0 and below": {"cpu": ("bfloat16", "float16")}},
backend_version,
)
def gather_nd(
params: paddle.Tensor,
indices: paddle.Tensor,
/,
*,
batch_dims: Optional[int] = 0,
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
"""gather_nd implementation with batch support."""
ivy.utils.assertions.check_gather_nd_input_valid(params, indices, batch_dims)
if not isinstance(batch_dims, int):
raise TypeError(f"Argument `batch_dims` must be an int; got {batch_dims}")
if batch_dims < 0:
raise ValueError("gather_nd does not allow negative batch_dims.")
params_ndims = params.ndim
indices_ndims = indices.ndim
if indices_ndims is not None and batch_dims >= indices_ndims:
raise ValueError(
f"Argument `batch_dims` = {batch_dims} must be "
f"less than rank(`indices`) = {indices_ndims}"
)
if params_ndims is not None and batch_dims >= params_ndims:
raise ValueError(
f"Argument `batch_dims` = {batch_dims} must be "
f"less than rank(`params`) = {params_ndims}"
)
expand = batch_dims == 0
if expand:
        # The batched implementation below requires batch_dims >= 1, so when
        # this function is called with batch_dims == 0 (e.g. for testing
        # purposes) a dummy batch dimension is added here and squeezed out
        # again at the end.
params = paddle_backend.expand_dims(params, axis=0)
indices = paddle_backend.expand_dims(indices, axis=0)
batch_dims = 1
if indices.dtype not in [paddle.int32, paddle.int64]:
indices = indices.cast(paddle.int32)
params_shape = paddle.to_tensor(params.shape)
indices_shape = indices.shape
batch_shape = params_shape[:batch_dims]
batch_size = paddle.prod(batch_shape, [0]).numpy().tolist()
if isinstance(batch_size, int):
batch_size = [batch_size]
index_internal_ndims = indices.ndim - batch_dims - 1
indices_internal_shape = indices_shape[batch_dims:-1]
# Assuming a 'params' with shape [b1, ..., bM, g1, ..., gN] and an 'indices'
# with shape [b1, ..., bM, i1, ..., iK, C], where C <= N, we need to modify
# 'indices' s.t. it has shape [i1, ..., iK, D], where D <= M + N and slices
# to the entire 'params' tensor.
# Assuming we have a batch of shape [B1, B2], we use meshgrid to create a
# grid of size B1 x B2.
batch_dim_list = paddle_backend.unstack(batch_shape, axis=0)
dim_ranges = [
paddle.arange(0, x.item(), 1, dtype=indices.dtype) for x in batch_dim_list
]
if dim_ranges:
if len(dim_ranges) > 1:
mesh_list = paddle_backend.meshgrid(*dim_ranges, indexing="ij")
else:
mesh_list = dim_ranges
else:
mesh_list = []
# Then we flatten and stack the tensors to form a (B1.B2) by 2 matrix.
flat_list = [paddle_backend.reshape(x, shape=(-1,)) for x in mesh_list]
stacked_list = (
paddle_backend.stack(flat_list, axis=0) if flat_list else paddle.to_tensor([])
)
index_grid = paddle_backend.permute_dims(
stacked_list, axes=[axis for axis in range(stacked_list.ndim)][::-1]
)
# We need to concatenate these batch coordinates with the internal indices.
# concat -> index_grid [B1.B2, 2] with indices [i1, ..., iK, C]
# So we reshape them both to [(B1.B2), i1, ..., iK, *]
index_grid_shape = index_grid.shape
index_grid = paddle_backend.reshape(
index_grid,
index_grid_shape[:1]
+ [
1,
]
* index_internal_ndims
+ index_grid_shape[1:],
)
tile_shape = (
[
1,
]
+ indices_internal_shape
+ [
1,
]
)
index_grid = paddle_backend.tile(index_grid, repeats=paddle.to_tensor(tile_shape))
# index_grid now has shape [(B1.B2), i1, ..., iK, 2]
flat_shape = batch_size + indices_shape[batch_dims:]
flat_indices = paddle_backend.reshape(indices, shape=flat_shape)
# flat_indices now has shape [(B1.B2), i1, ..., iK, C]
indices = paddle_backend.concat((index_grid, flat_indices), axis=-1)
# indices has shape [(B1.B2), i1, ..., iK, 2+C]
if params.dtype in [
paddle.int8,
paddle.float16,
paddle.complex64,
paddle.complex128,
]:
if paddle.is_complex(params):
out = paddle.complex(
paddle.gather_nd(params.real(), indices),
paddle.gather_nd(params.imag(), indices),
)
else:
out = paddle.gather_nd(params.cast("float32"), indices).cast(params.dtype)
else:
out = paddle.gather_nd(params, indices)
# out has shape [(B1.B2), i1, ..., iK, N-C]. Now we reshape batch to
# its original form.
out_shape = out.shape
out = paddle_backend.reshape(out, shape=batch_shape.tolist() + out_shape[1:])
if expand:
out = paddle_backend.squeeze(out, axis=0)
return out
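# Illustrative usage (hedged sketch) of the batched path above:
#     params  = paddle.arange(24).reshape([2, 3, 4])        # 2 batch elements
#     indices = paddle.to_tensor([[[0], [2]], [[1], [0]]])  # shape [2, 2, 1]
#     gather_nd(params, indices, batch_dims=1)              # shape [2, 2, 4]
# Each batch element of `indices` selects rows from the matching batch element
# of `params`; the meshgrid-built `index_grid` supplies the batch coordinates
# that plain paddle.gather_nd needs.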
def get_num_dims(
x: paddle.Tensor, /, *, as_array: bool = False
) -> Union[paddle.Tensor, int]:
return paddle.to_tensor(x.ndim).squeeze() if as_array else x.ndim
def inplace_arrays_supported():
    # some paddle operations support inplace updates,
    # but inplace behaviour is not supported across all functions
return False
def inplace_decrement(
x: Union[ivy.Array, paddle.Tensor],
val: Union[ivy.Array, paddle.Tensor],
) -> ivy.Array:
(x_native, val_native), _ = ivy.args_to_native(x, val)
if ivy.is_ivy_array(x):
target = x.data
else:
target = x
return paddle.assign(paddle_backend.subtract(x_native, val_native), target)
def inplace_increment(
x: Union[ivy.Array, paddle.Tensor],
val: Union[ivy.Array, paddle.Tensor],
) -> ivy.Array:
(x_native, val_native), _ = ivy.args_to_native(x, val)
if ivy.is_ivy_array(x):
target = x.data
else:
target = x
return paddle.assign(paddle_backend.add(x_native, val_native), target)
def inplace_update(
x: Union[ivy.Array, paddle.Tensor],
val: Union[ivy.Array, paddle.Tensor],
/,
*,
ensure_in_backend: bool = False,
keep_input_dtype: bool = False,
) -> ivy.Array:
_check_inplace_update_support(x, ensure_in_backend)
if ivy.is_array(x) and ivy.is_array(val):
(x_native, val_native), _ = ivy.args_to_native(x, val)
if val_native.shape == x_native.shape:
if x_native.dtype != val_native.dtype:
x_native = x_native.astype(val_native.dtype)
paddle.assign(val_native, x_native)
else:
x_native = val_native
if ivy.is_native_array(x):
return x_native
if ivy.is_ivy_array(x):
x.data = x_native
else:
x = ivy.Array(x_native)
return x
else:
return val
def inplace_variables_supported():
return False
def multiprocessing(context=None):
return (
_multiprocessing if context is None else _multiprocessing.get_context(context)
)
def scatter_flat(
indices: paddle.Tensor,
updates: paddle.Tensor,
/,
*,
size: Optional[int] = None,
reduction: str = "sum",
out: Optional[paddle.Tensor] = None,
):
if indices.dtype not in [paddle.int32, paddle.int64]:
indices = indices.cast("int64")
if ivy.exists(size) and ivy.exists(out):
ivy.utils.assertions.check_equal(out.ndim, 1, as_array=False)
ivy.utils.assertions.check_equal(out.shape[0], size, as_array=False)
return paddle_backend.scatter_nd(
indices.unsqueeze(-1), updates, shape=[size], reduction=reduction, out=out
)
def scatter_nd(
indices: paddle.Tensor,
updates: paddle.Tensor,
/,
shape: Optional[Union[ivy.NativeShape, Sequence[int]]] = None,
*,
reduction: str = "sum",
out: Optional[paddle.Tensor] = None,
) -> paddle.Tensor:
updates = paddle.to_tensor(
updates,
dtype=(
ivy.promote_types(out.dtype, updates.dtype)
if ivy.exists(out)
else ivy.default_dtype(item=updates)
),
)
if indices.dtype not in [paddle.int32, paddle.int64]:
indices = indices.cast(paddle.int32)
expected_shape = (
list(indices.shape[:-1]) + list(out.shape[indices.shape[-1] :])
if ivy.exists(out)
else list(indices.shape[:-1]) + list(shape[indices.shape[-1] :])
)
updates = _broadcast_to(updates, expected_shape).data
# remove duplicate indices
# necessary because we will be using scatter_nd_add
if indices.ndim > 1 and reduction != "sum":
indices_shape = indices.shape
indices = paddle.reshape(indices, (-1, indices.shape[-1]))
num_indices = indices.shape[0]
# use flip to keep the last occurrence of each value
indices, unique_idxs = ivy.unique_all(
ivy.flip(indices, axis=[0]), axis=0, by_value=True
)[:2]
indices = indices.data
if len(unique_idxs) < num_indices:
updates = paddle.reshape(
updates, (-1, *updates.shape[len(indices_shape) - 1 :])
)
updates = ivy.gather(ivy.flip(updates, axis=[0]), unique_idxs, axis=0).data
expected_shape = (
list(indices.shape[:-1]) + list(out.shape[indices.shape[-1] :])
if ivy.exists(out)
else list(indices.shape[:-1]) + list(shape[indices.shape[-1] :])
)
else:
indices = paddle.reshape(indices, indices_shape)
# implementation
target_given = ivy.exists(out)
if target_given:
target = out.data
else:
shape = list(shape) if ivy.exists(shape) else out.shape
target = paddle.zeros(shape=shape).astype(updates.dtype)
if ivy.exists(shape) and target_given:
ivy.utils.assertions.check_equal(
ivy.Shape(target.shape), ivy.Shape(shape), as_array=False
)
if reduction not in ["sum", "replace", "min", "max"]:
raise ivy.utils.exceptions.IvyException(
f'reduction is {reduction}, but it must be one of "sum", "min", "max" or'
' "replace"'
)
if reduction == "min":
updates = ivy.minimum(ivy.gather_nd(target, indices), updates).data
elif reduction == "max":
updates = ivy.maximum(ivy.gather_nd(target, indices), updates).data
elif reduction == "sum":
updates = ivy.add(ivy.gather_nd(target, indices), updates).data
if indices.ndim <= 1:
indices = ivy.expand_dims(indices, axis=0).data
updates = ivy.expand_dims(updates, axis=0).data
updates_ = _broadcast_to(ivy.gather_nd(target, indices), expected_shape).data
target_dtype = target.dtype
if target_dtype in [
paddle.complex64,
paddle.complex128,
]:
result_real = paddle.scatter_nd_add(
paddle.scatter_nd_add(target.real(), indices, -updates_.real()),
indices,
updates.real(),
)
result_imag = paddle.scatter_nd_add(
paddle.scatter_nd_add(target.imag(), indices, -updates_.imag()),
indices,
updates.imag(),
)
ret = paddle.complex(result_real, result_imag)
elif target_dtype in [
paddle.int8,
paddle.int16,
paddle.uint8,
paddle.float16,
paddle.bool,
]:
target, updates, updates_ = (
target.cast("float32"),
updates.cast("float32"),
updates_.cast("float32"),
)
ret = paddle.scatter_nd_add(
paddle.scatter_nd_add(target, indices, -updates_),
indices,
updates,
).cast(target_dtype)
else:
ret = paddle.scatter_nd_add(
paddle.scatter_nd_add(target, indices, -updates_),
indices,
updates,
)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
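# Illustrative usage (hedged sketch): scattering into a fresh length-5 vector
# with the default "sum" reduction.
#     indices = paddle.to_tensor([[1], [3]])
#     updates = paddle.to_tensor([10.0, 20.0])
#     scatter_nd(indices, updates, shape=[5])   # -> [0., 10., 0., 20., 0.]
# With reduction="replace", subtracting the gathered old values before the
# scatter_nd_add makes the addition behave like an overwrite.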
def shape(
x: paddle.Tensor, /, *, as_array: bool = False
) -> Union[ivy.Shape, ivy.Array]:
if as_array:
return ivy.array(x.shape, dtype=ivy.default_int_dtype())
else:
return ivy.Shape(x.shape)
def vmap(
func: Callable,
in_axes: Union[int, Sequence[int], Sequence[None]] = 0,
out_axes: int = 0,
) -> Callable:
@ivy.output_to_native_arrays
@ivy.inputs_to_native_arrays
def _vmap(*args, **kwargs):
# convert args tuple to list to allow mutability using moveaxis ahead.
args = list(args)
# if in_axis is a non-integer, its length should be equal to pos args.
if isinstance(in_axes, (list, tuple)):
ivy.utils.assertions.check_equal(
len(args),
len(in_axes),
message="""in_axes should have a length equivalent to the number
of positional arguments to the function being vectorized or it
should be an integer""",
as_array=False,
)
# checking axis_size consistency
axis_size = set()
if isinstance(in_axes, int):
for arg in args:
axis_size.add(arg.shape[in_axes])
elif isinstance(in_axes, (list, tuple)):
for arg, axis in zip(args, in_axes):
if axis is not None:
axis_size.add(arg.shape[axis])
if len(axis_size) > 1:
raise ivy.utils.exceptions.IvyException(
"""Inconsistent sizes. All mapped axes should have the same size"""
)
# Making sure not all in_axes are None
if isinstance(in_axes, (list, tuple)):
ivy.utils.assertions.check_any(
[ivy.exists(ax) for ax in in_axes],
message="At least one of the axes should be specified (not None)",
as_array=False,
)
else:
ivy.utils.assertions.check_exists(
in_axes, message="single value in_axes should not be None"
)
# Handling None in in_axes by broadcasting the axis_size
if isinstance(in_axes, (tuple, list)) and None in in_axes:
none_axis_index = []
for index, axis in enumerate(in_axes):
if axis is None:
none_axis_index.append(index)
for none_mapped_axis in none_axis_index:
args[none_mapped_axis] = paddle_backend.broadcast_to(
args[none_mapped_axis],
(tuple(axis_size) + args[none_mapped_axis].shape),
)
# set up the axis to be mapped
if isinstance(in_axes, (tuple, list)):
for i in range(len(in_axes)):
args[i] = paddle_backend.moveaxis(args[i], in_axes[i], 0)
elif isinstance(in_axes, int):
args[0] = paddle_backend.moveaxis(args[0], in_axes, 0)
# vectorisation - applying map_fn if only one arg provided as reduce requires
# two elements to begin with.
arr_results = []
for arrays in zip(*args):
arrays = [a if a.shape != [] else a.unsqueeze(0) for a in arrays]
arr_results.append(func(*arrays))
res = paddle_backend.concat(arr_results)
if out_axes:
res = paddle_backend.moveaxis(res, 0, out_axes)
return res
return _vmap
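# Illustrative usage (hedged sketch): vectorising a dot product over a leading
# batch axis with the paddle backend active.
#     batched_dot = vmap(lambda a, b: ivy.sum(a * b), in_axes=(0, 0))
#     xs = ivy.random_normal(shape=(8, 3))
#     ys = ivy.random_normal(shape=(8, 3))
#     batched_dot(xs, ys)   # one scalar per row -> shape (8,)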
def isin(
elements: paddle.Tensor,
test_elements: paddle.Tensor,
/,
*,
assume_unique: Optional[bool] = False,
invert: Optional[bool] = False,
) -> paddle.Tensor:
input_shape = elements.shape
if elements.ndim == 0:
elements = paddle_backend.expand_dims(elements, axis=0)
if test_elements.ndim == 0:
test_elements = paddle_backend.expand_dims(test_elements, axis=0)
if not assume_unique:
test_elements = paddle_backend.unique_values(test_elements)
elements = elements.reshape([-1])
test_elements = test_elements.reshape([-1])
output = paddle_backend.any(
paddle_backend.equal(
paddle_backend.expand_dims(elements, axis=-1), test_elements
),
axis=-1,
)
return paddle_backend.logical_xor(
paddle_backend.reshape(output, input_shape), invert
)
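# Illustrative usage (hedged sketch):
#     isin(paddle.to_tensor([1, 2, 3]), paddle.to_tensor([2, 4]))
#     -> Tensor([False, True, False])
# `invert=True` flips the mask via the logical_xor above.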
def itemsize(x: paddle.Tensor) -> int:
return x.element_size()
| ivy/ivy/functional/backends/paddle/general.py/0 | {
"file_path": "ivy/ivy/functional/backends/paddle/general.py",
"repo_id": "ivy",
"token_count": 10852
} | 21 |
# global
import numpy as np
from numbers import Number
from typing import Union, List, Optional, Sequence, Tuple
import tensorflow as tf
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.ivy.creation import (
_asarray_to_native_arrays_and_back,
_asarray_infer_device,
_asarray_infer_dtype,
_asarray_handle_nestable,
NestedSequence,
SupportsBufferProtocol,
_asarray_inputs_to_native_shapes,
)
from . import backend_version
# Array API Standard #
# -------------------#
@with_unsupported_dtypes(
{
"2.15.0 and below": (
"float16",
"bfloat16",
"complex",
)
},
backend_version,
)
def arange(
start: float,
/,
stop: Optional[float] = None,
step: float = 1,
*,
dtype: Optional[tf.DType] = None,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if stop is None:
stop = start
start = 0
if (step > 0 and start > stop) or (step < 0 and start < stop):
if isinstance(stop, float):
stop = float(start)
else:
stop = start
if dtype is None:
if isinstance(start, int) and isinstance(stop, int) and isinstance(step, int):
return tf.cast(tf.range(start, stop, delta=step, dtype=tf.int64), tf.int32)
else:
return tf.range(start, stop, delta=step)
else:
dtype = ivy.as_native_dtype(ivy.default_dtype(dtype=dtype))
if dtype in [tf.int8, tf.uint8, tf.int16, tf.uint16, tf.uint32, tf.uint64]:
return tf.cast(tf.range(start, stop, delta=step, dtype=tf.int64), dtype)
else:
return tf.range(start, stop, delta=step, dtype=dtype)
@_asarray_to_native_arrays_and_back
@_asarray_infer_device
@_asarray_handle_nestable
@_asarray_inputs_to_native_shapes
@_asarray_infer_dtype
def asarray(
obj: Union[
tf.Tensor,
tf.Variable,
tf.TensorShape,
bool,
int,
float,
NestedSequence,
SupportsBufferProtocol,
np.ndarray,
],
/,
*,
copy: Optional[bool] = None,
dtype: Optional[tf.DType] = None,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
# convert the input to a tensor using the appropriate function
with tf.device(device):
if tf.is_tensor(obj):
ret = tf.cast(obj, dtype) if obj.dtype != dtype else obj
elif (
dtype is not None
and dtype.is_integer
and np.issubdtype(np.array(obj).dtype, np.floating)
):
obj_np = np.array(obj)
ret = tf.convert_to_tensor(obj_np, dtype)
else:
ret = tf.convert_to_tensor(obj, dtype)
return tf.identity(ret) if (copy or ret.device != device) else ret
def empty(
shape: Union[ivy.NativeShape, Sequence[int]],
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.empty(shape, dtype)
def empty_like(
x: Union[tf.Tensor, tf.Variable],
/,
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.empty_like(x, dtype=dtype)
@with_unsupported_dtypes({"2.15.0 and below": ("uint16",)}, backend_version)
def eye(
n_rows: int,
n_cols: Optional[int] = None,
/,
*,
k: int = 0,
batch_shape: Optional[Union[int, Sequence[int]]] = None,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if n_cols is None:
n_cols = n_rows
if batch_shape is None:
batch_shape = []
i = tf.eye(n_rows, n_cols, dtype=dtype)
reshape_dims = [1] * len(batch_shape) + [n_rows, n_cols]
tile_dims = list(batch_shape) + [1, 1]
# k=index of the diagonal. A positive value refers to an upper diagonal,
# a negative value to a lower diagonal, and 0 to the main diagonal.
# Default: ``0``.
# value of k ranges from -n_rows < k < n_cols
# k=0 refers to the main diagonal
if k == 0:
return tf.eye(n_rows, n_cols, batch_shape=batch_shape, dtype=dtype)
# when k is negative
elif -n_rows < k < 0:
mat = tf.concat(
[tf.zeros([-k, n_cols], dtype=dtype), i[: n_rows + k]],
0,
)
return tf.tile(tf.reshape(mat, reshape_dims), tile_dims)
elif 0 < k < n_cols:
mat = tf.concat(
[
tf.zeros([n_rows, k], dtype=dtype),
i[:, : n_cols - k],
],
1,
)
return tf.tile(tf.reshape(mat, reshape_dims), tile_dims)
else:
return tf.zeros(batch_shape + [n_rows, n_cols], dtype=dtype)
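# Worked example (hedged sketch) of the positive-offset branch above:
#     eye(3, 4, k=1, dtype=tf.float32)
#     -> [[0., 1., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 0., 0., 1.]]
# The identity block is shifted one column right by left-padding with zeros.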
def to_dlpack(
x: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
):
if isinstance(x, tf.Variable):
x = x.read_value()
dlcapsule = tf.experimental.dlpack.to_dlpack(x)
return dlcapsule
# noinspection PyShadowingNames
def from_dlpack(
x,
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if isinstance(x, tf.Variable):
x = x.read_value()
if hasattr(x, "__dlpack__"):
capsule = x.__dlpack__()
else:
capsule = x
return tf.experimental.dlpack.from_dlpack(capsule)
def full(
shape: Union[ivy.NativeShape, Sequence[int]],
fill_value: Union[int, float, bool],
*,
dtype: Optional[Union[ivy.Dtype, tf.DType]] = None,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
dtype = ivy.default_dtype(dtype=dtype, item=fill_value, as_native=True)
return tf.experimental.numpy.full(shape, fill_value, dtype=dtype)
def full_like(
x: Union[tf.Tensor, tf.Variable],
/,
fill_value: Number,
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.full_like(x, fill_value, dtype=dtype)
def _slice_at_axis(sl, axis):
return (slice(None),) * axis + (sl,) + (...,)
def linspace(
start: Union[tf.Tensor, tf.Variable, float],
stop: Union[tf.Tensor, tf.Variable, float],
/,
num: int,
*,
axis: Optional[int] = None,
endpoint: bool = True,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
):
if axis is None:
axis = -1
start = tf.cast(tf.constant(start), dtype=dtype)
stop = tf.cast(tf.constant(stop), dtype=dtype)
if not endpoint:
ans = tf.linspace(start, stop, num + 1, axis=axis)
if axis < 0:
axis += len(ans.shape)
ans = tf.convert_to_tensor(ans.numpy()[_slice_at_axis(slice(None, -1), axis)])
else:
ans = tf.linspace(start, stop, num, axis=axis)
if dtype.is_integer and ans.dtype.is_floating:
ans = tf.math.floor(ans)
return tf.cast(ans, dtype)
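# Worked example (hedged sketch) of the endpoint=False branch above:
#     linspace(0.0, 1.0, 4, endpoint=False, dtype=tf.float32)
# builds tf.linspace(0., 1., 5) = [0., 0.25, 0.5, 0.75, 1.] and drops the
# final element, returning [0., 0.25, 0.5, 0.75].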
@with_unsupported_dtypes({"2.15.0 and below": ("bool",)}, backend_version)
def meshgrid(
*arrays: Union[tf.Tensor, tf.Variable],
sparse: bool = False,
indexing: str = "xy",
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
if not sparse:
return tf.meshgrid(*arrays, indexing=indexing)
sd = (1,) * len(arrays)
res = [
tf.reshape(tf.convert_to_tensor(a), (sd[:i] + (-1,) + sd[i + 1 :]))
for i, a in enumerate(arrays)
]
if indexing == "xy" and len(arrays) > 1:
res[0] = tf.reshape(res[0], (1, -1) + sd[2:])
res[1] = tf.reshape(res[1], (-1, 1) + sd[2:])
return res
def ones(
shape: Union[ivy.NativeShape, Sequence[int]],
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.ones(shape, dtype)
def ones_like(
x: Union[tf.Tensor, tf.Variable],
/,
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.ones_like(x, dtype=dtype)
@with_unsupported_dtypes({"2.15.0 and below": ("bool",)}, backend_version)
def tril(
x: Union[tf.Tensor, tf.Variable],
/,
*,
k: int = 0,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
# TODO: A way around tf.experimental.numpy.tril as it doesn't support bool
# and neither rank 1 tensors while np.tril does support both. Needs superset.
return tf.experimental.numpy.tril(x, k)
@with_unsupported_dtypes({"2.15.0 and below": ("bool",)}, backend_version)
def triu(
x: Union[tf.Tensor, tf.Variable],
/,
*,
k: int = 0,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.triu(x, k)
def zeros(
shape: Union[ivy.NativeShape, Sequence[int]],
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.zeros(shape, dtype)
def zeros_like(
x: Union[tf.Tensor, tf.Variable],
/,
*,
dtype: tf.DType,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.zeros_like(x, dtype=dtype)
# Extra #
# ------#
array = asarray
def copy_array(
x: Union[tf.Tensor, tf.Variable, tf.TensorArray],
*,
to_ivy_array: bool = True,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if isinstance(x, tf.TensorArray):
x_wrapped = x.stack()
y = tf.TensorArray(x.dtype, x.size())
x = y.unstack(ivy.copy_array(x_wrapped))
else:
x = tf.identity(x)
if to_ivy_array:
return ivy.to_ivy(x)
return x
def one_hot(
indices: Union[tf.Tensor, tf.Variable],
depth: int,
/,
*,
on_value: Optional[Number] = None,
off_value: Optional[Number] = None,
axis: Optional[int] = None,
dtype: Optional[tf.DType] = None,
device: Optional[str] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.one_hot(
indices, depth, on_value=on_value, off_value=off_value, axis=axis, dtype=dtype
)
@with_unsupported_dtypes({"2.15.0 and below": ("uint32", "uint64")}, backend_version)
def frombuffer(
buffer: bytes,
dtype: tf.DType = float,
count: int = -1,
offset: int = 0,
) -> Union[tf.Tensor, tf.Variable]:
if isinstance(buffer, bytearray):
buffer = bytes(buffer)
ret = tf.io.decode_raw(buffer, dtype)
dtype = tf.dtypes.as_dtype(dtype)
if offset > 0:
offset = int(offset / dtype.size)
if count > -1:
ret = ret[offset : offset + count]
else:
ret = ret[offset:]
return ret
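# Illustrative usage (hedged sketch):
#     frombuffer(b"\x01\x00\x02\x00", dtype=tf.int16)   # -> [1, 2]
# `offset` is given in bytes and converted to element counts above before
# slicing the decoded tensor.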
def triu_indices(
n_rows: int,
n_cols: Optional[int] = None,
k: int = 0,
/,
*,
device: Optional[str] = None,
) -> Tuple[Union[tf.Tensor, tf.Variable]]:
n_cols = n_rows if n_cols is None else n_cols
if n_rows < 0 or n_cols < 0:
n_rows, n_cols = 0, 0
ret = [[], []]
for i in range(0, min(n_rows, n_cols - k), 1):
for j in range(max(0, k + i), n_cols, 1):
ret[0].append(i)
ret[1].append(j)
return tuple(tf.convert_to_tensor(ret, dtype=tf.int64))
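# Worked example (hedged sketch) of the loop above:
#     triu_indices(3, 3, 1) -> ([0, 0, 1], [1, 2, 2])
# i.e. the row/column coordinates of the strictly upper triangle.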
| ivy/ivy/functional/backends/tensorflow/creation.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/creation.py",
"repo_id": "ivy",
"token_count": 5492
} | 22 |
import tensorflow as tf
from typing import Literal, Union, Optional, Tuple
from ivy.func_wrapper import with_supported_dtypes, with_unsupported_dtypes
from . import backend_version
import math
@with_unsupported_dtypes({"2.15.0 and below": "uint8"}, backend_version)
def l1_normalize(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
out: Optional[tf.Tensor] = None,
) -> tf.Tensor:
denorm = tf.norm(x, ord=1, axis=axis, keepdims=True)
denorm = tf.math.maximum(denorm, 1e-12)
return tf.math.divide(x, denorm)
def l2_normalize(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
out: Optional[tf.Tensor] = None,
) -> tf.Tensor:
denorm = tf.norm(x, axis=axis, keepdims=True)
denorm = tf.math.maximum(denorm, 1e-12)
return tf.math.divide(x, denorm)
@with_supported_dtypes({"2.15.0 and below": ("float32", "float16")}, backend_version)
def local_response_norm(
x: Union[tf.Tensor, tf.Variable],
size,
/,
*,
bias: Optional[float] = 1.0,
alpha: Optional[float] = 1.0,
beta: Optional[float] = 0.5,
average: bool = False,
data_format: Optional[Literal["NHWC", "NCHW"]] = "NHWC",
out: Optional[tf.Tensor] = None,
) -> tf.Tensor:
if data_format == "NCHW":
x = tf.transpose(x, (0, 2, 3, 1))
# `alpha = alpha/size if average else alpha` was causing numerical instability
if average:
ret = tf.nn.local_response_normalization(
x / math.sqrt(size),
depth_radius=size // 2,
bias=bias,
alpha=alpha,
beta=beta,
) * math.sqrt(size)
else:
ret = tf.nn.local_response_normalization(
x, depth_radius=size // 2, bias=bias, alpha=alpha, beta=beta
)
if data_format == "NCHW":
ret = tf.transpose(ret, (0, 3, 1, 2))
return ret
local_response_norm.partial_mixed_handler = lambda x, size, **kwargs: size % 2 != 0
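# Note on the scaling trick above (hedged sketch): tf.nn.local_response_normalization
# computes x / (bias + alpha * sqr_sum)^beta, so feeding x / sqrt(size) scales
# sqr_sum by 1/size and the output by 1/sqrt(size); multiplying the result by
# sqrt(size) therefore yields x / (bias + (alpha/size) * sqr_sum)^beta exactly,
# i.e. the "average" variant, without ever forming alpha/size.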
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, backend_version)
def batch_norm(
x: Union[tf.Tensor, tf.Variable],
mean: Union[tf.Tensor, tf.Variable],
variance: Union[tf.Tensor, tf.Variable],
/,
*,
scale: Optional[Union[tf.Tensor, tf.Variable]] = None,
offset: Optional[Union[tf.Tensor, tf.Variable]] = None,
training: Optional[bool] = False,
eps: Optional[float] = 1e-5,
momentum: Optional[float] = 1e-1,
data_format: Optional[str] = "NSC",
out: Optional[
Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]
] = None,
) -> Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]:
xdims = len(x.shape)
if data_format == "NCS":
x = tf.transpose(x, perm=(0, *range(2, xdims), 1))
runningmean = mean
runningvariance = variance
if training:
n = tf.size(x) if xdims == 1 else tf.divide(tf.size(x), tf.shape(x)[-1])
n = tf.cast(n, x.dtype) if n.dtype != x.dtype else n
dims = (0, *range(1, xdims - 1))
mean = tf.math.reduce_mean(x, axis=dims)
variance = tf.math.reduce_variance(x, axis=dims)
runningmean = (1 - momentum) * runningmean + momentum * mean
runningvariance = (1 - momentum) * runningvariance + momentum * variance * n / (
n - 1
)
inv = 1.0 / tf.math.sqrt(variance + eps)
offset = 0 if offset is None else offset
if scale is not None:
inv = tf.math.multiply(inv, scale)
xnormalized = tf.math.add(tf.math.multiply(x, inv), offset)
xnormalized = tf.math.subtract(xnormalized, tf.math.multiply(mean, inv))
# the above approach is faster than tf.nn.batch_normalization
if data_format == "NCS":
xnormalized = tf.transpose(
xnormalized, perm=(0, xdims - 1, *range(1, xdims - 1))
)
return xnormalized, runningmean, runningvariance
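# Worked note (hedged sketch) on the running statistics above: with
# momentum=0.1 the update is running_mean = 0.9 * running_mean + 0.1 * batch_mean,
# and the running variance uses the unbiased batch variance (scaled by
# n / (n - 1)), the usual train-time batch-norm bookkeeping.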
def instance_norm(
x: Union[tf.Tensor, tf.Variable],
mean: Union[tf.Tensor, tf.Variable],
variance: Union[tf.Tensor, tf.Variable],
/,
*,
scale: Optional[Union[tf.Tensor, tf.Variable]] = None,
offset: Optional[Union[tf.Tensor, tf.Variable]] = None,
training: Optional[bool] = False,
eps: Optional[float] = 1e-5,
momentum: Optional[float] = 1e-1,
data_format: Optional[str] = "NSC",
out: Optional[
Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]
] = None,
) -> Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]:
# Instance Norm with (N,H,W,C) is the same as BatchNorm with (1, H, W, N*C)
xdims = len(x.shape)
if data_format == "NCS":
x = tf.transpose(x, perm=(*range(2, xdims), 0, 1))
elif data_format == "NSC":
x = tf.transpose(x, perm=(*range(1, xdims - 1), 0, xdims - 1))
else:
raise ValueError(f"Invalid data_format: {data_format}.")
N = x.shape[-2]
C = x.shape[-1]
S = x.shape[0:-2]
x = tf.reshape(x, (1, *S, N * C))
mean = tf.tile(mean, [N])
variance = tf.tile(variance, [N])
if scale is not None:
scale = tf.tile(scale, [N])
if offset is not None:
offset = tf.tile(offset, [N])
xnormalized, runningmean, runningvariance = batch_norm(
x,
mean,
variance,
scale=scale,
offset=offset,
training=training,
eps=eps,
momentum=momentum,
)
xnormalized = tf.reshape(xnormalized, (*S, N, C))
if data_format == "NCS":
xnormalized = tf.transpose(
xnormalized, perm=(xdims - 2, xdims - 1, *range(0, xdims - 2))
)
else:
xnormalized = tf.transpose(
xnormalized, perm=(xdims - 2, *range(0, xdims - 2), xdims - 1)
)
return (
xnormalized,
tf.reduce_mean(tf.reshape(runningmean, (N, C)), axis=0),
tf.reduce_mean(tf.reshape(runningvariance, (N, C)), axis=0),
)
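# Illustrative sketch (hypothetical helper, added for exposition only): as the
# comment at the top of the function says, instance norm over (N, ..., C) is
# batch norm over a single pseudo-batch of shape (1, ..., N * C), with the
# per-channel stats tiled N times so each (sample, channel) pair normalises
# independently; the running stats are then averaged back over the N copies.
def _demo_instance_norm():
    x = tf.random.normal((2, 5, 5, 3))  # NSC layout: N=2, spatial 5x5, C=3
    out, m, v = instance_norm(x, tf.zeros((3,)), tf.ones((3,)), training=True)
    return out.shape, m.shape, v.shape  # (2, 5, 5, 3), (3,), (3,)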
def lp_normalize(
x: Union[tf.Tensor, tf.Variable],
/,
*,
p: float = 2,
axis: Optional[int] = None,
out: Optional[tf.Tensor] = None,
) -> tf.Tensor:
denorm = tf.norm(x, ord=p, axis=axis, keepdims=True)
denorm = tf.math.maximum(denorm, 1e-12)
return tf.math.divide(x, denorm)
| ivy/ivy/functional/backends/tensorflow/experimental/norms.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/experimental/norms.py",
"repo_id": "ivy",
"token_count": 2993
} | 23 |
# global
import tensorflow as tf
from typing import Tuple, Union, Optional
from collections import namedtuple
from ivy.func_wrapper import with_unsupported_dtypes
from . import backend_version
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def unique_all(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
by_value: bool = True,
) -> Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]:
Results = namedtuple(
"Results",
["values", "indices", "inverse_indices", "counts"],
)
if axis is None:
x = tf.reshape(x, shape=(-1,))
axis = 0
values, inverse_indices, counts = tf.raw_ops.UniqueWithCountsV2(
x=x,
axis=tf.constant([axis], dtype=tf.int32),
)
    if x.dtype.is_floating and tf.reduce_any(tf.math.is_nan(values)):
        tensor_list = x.numpy().tolist()
        unique_nan = tf.math.is_nan(values)
        nan_index = tf.reshape(tf.where(tf.math.is_nan(x)), [-1])
non_nan_index = tf.experimental.numpy.array(
[tensor_list.index(val) for val in values if not tf.math.is_nan(val)]
)
indices = tf.experimental.numpy.full(
fill_value=float("NaN"), shape=values.shape
).numpy()
indices[unique_nan] = nan_index
indices[~unique_nan] = non_nan_index
else:
decimal = tf.range(tf.size(inverse_indices)) / tf.size(inverse_indices)
inv_sorted = tf.argsort(
tf.cast(inverse_indices, dtype=decimal.dtype) + decimal
).numpy()
total_counts = tf.concat(
[tf.zeros((1,), dtype=counts.dtype), tf.cumsum(counts, axis=0)[:-1]], 0
)
indices = inv_sorted[total_counts]
if by_value:
values_ = tf.experimental.numpy.moveaxis(values, axis, 0)
values_ = tf.reshape(values_, (values_.shape[0], -1))
sort_idx = tf.stack(
[i[0] for i in sorted(enumerate(values_), key=lambda x: tuple(x[1]))]
)
sort_idx = tf.cast(sort_idx, tf.int32)
values = tf.gather(values, sort_idx, axis=axis)
counts = tf.gather(counts, sort_idx)
indices = tf.gather(indices, sort_idx)
inv_sort_idx = tf.math.invert_permutation(sort_idx)
inverse_indices = tf.map_fn(
lambda y: tf.gather(inv_sort_idx, y), inverse_indices
)
return Results(
tf.cast(values, dtype=x.dtype),
tf.cast(indices, dtype=tf.int64),
tf.cast(inverse_indices, dtype=tf.int64),
tf.cast(counts, dtype=tf.int64),
)
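# Illustrative sketch (hypothetical helper, added for exposition only): the
# four outputs for a flat tensor, with the sorted-by-value ordering that the
# default `by_value=True` branch above produces.
def _demo_unique_all():
    res = unique_all(tf.constant([3.0, 1.0, 2.0, 1.0]))
    # res.values          -> [1., 2., 3.]
    # res.indices         -> first flat index of each value, here [1, 2, 0]
    # res.inverse_indices -> index into `values` per element, here [2, 0, 1, 0]
    # res.counts          -> [2, 1, 1]
    return res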
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def unique_counts(
x: Union[tf.Tensor, tf.Variable],
/,
) -> Tuple[Union[tf.Tensor, tf.Variable], Union[tf.Tensor, tf.Variable]]:
Results = namedtuple("Results", ["values", "counts"])
v, _, c = tf.unique_with_counts(tf.sort(tf.reshape(x, [-1])))
v = tf.cast(v, dtype=x.dtype)
c = tf.cast(c, dtype=tf.int64)
return Results(v, c)
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def unique_inverse(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
) -> Tuple[Union[tf.Tensor, tf.Variable], Union[tf.Tensor, tf.Variable]]:
Results = namedtuple("Results", ["values", "inverse_indices"])
if axis is None:
x = tf.reshape(x, shape=(-1,))
axis = 0
flat_tensor = tf.reshape(x, -1)
values = tf.unique(tf.sort(flat_tensor))[0]
values = tf.cast(values, dtype=x.dtype)
values_list = values.numpy().tolist()
inverse_indices = [values_list.index(val) for val in flat_tensor.numpy().tolist()]
inverse_indices = tf.reshape(tf.convert_to_tensor(inverse_indices), x.shape)
inverse_indices = tf.cast(inverse_indices, dtype=tf.int64)
return Results(values, inverse_indices)
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def unique_values(
x: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
ret = tf.unique(tf.reshape(x, [-1]))[0]
return tf.sort(ret)
| ivy/ivy/functional/backends/tensorflow/set.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/set.py",
"repo_id": "ivy",
"token_count": 1962
} | 24 |
# global
import torch as torch
backend_version = {"version": torch.__version__.split("+")[0]}
from .activations import *
from .converters import *
from .creation import *
from .data_type import *
from .device import *
from .elementwise import *
from .general import *
from .gradients import *
from .layers import *
from .linear_algebra import *
from .losses import *
from .manipulation import *
from .norms import *
from .random import *
from .searching import *
from .set import *
from .sorting import *
from .sparse_array import *
from .statistical import *
from .utility import *
| ivy/ivy/functional/backends/torch/experimental/__init__.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/experimental/__init__.py",
"repo_id": "ivy",
"token_count": 174
} | 25 |
# global
torch_scatter = None
from typing import Union, Optional, Sequence
import torch
# local
import ivy
from ivy.functional.ivy.statistical import _get_promoted_type_of_operands
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from . import backend_version
# Array API Standard #
# -------------------#
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
def min(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[torch.Tensor] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if axis == ():
if ivy.exists(out):
return ivy.inplace_update(out, x)
else:
return x
if where is not None:
max_val = (
ivy.iinfo(x.dtype).max
if ivy.is_int_dtype(x.dtype)
else ivy.finfo(x.dtype).max
)
val = torch.ones_like(x) * max_val
val = val.type(x.dtype)
x = torch.where(where, x, val)
    if not keepdims and not axis and axis != 0:
        result = torch.amin(input=x, out=out)
    else:
        result = torch.amin(input=x, dim=axis, keepdim=keepdims, out=out)
if initial is not None:
initial = torch.tensor(initial, dtype=x.dtype)
result = torch.minimum(result, initial)
return result
min.support_native_out = True
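# Illustrative sketch (hypothetical helper, added for exposition only): `where`
# masks entries out of the reduction by substituting the dtype's maximum, and
# `initial` caps the result from above, mirroring NumPy's semantics.
def _demo_min():
    x = torch.tensor([[4.0, 1.0], [2.0, 3.0]])
    full = min(x)  # tensor(1.)
    masked = min(x, where=torch.tensor([[True, False], [True, True]]))
    # masked == tensor(2.), since the 1.0 entry is excluded
    capped = min(x, initial=0.5)  # tensor(0.5)
    return full, masked, capped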
def max(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if torch.is_complex(x):
const = torch.tensor(1j, device=x.device, dtype=x.dtype)
real_max = torch.max(x.real, dim=axis, keepdim=keepdims).values
min_val = torch.finfo(x.real.dtype).min
imag = torch.where(x.real == real_max, x.imag, min_val)
# we consider the number with the biggest real and imag part
img_max = torch.max(imag, dim=axis, keepdim=keepdims).values
img_max = img_max.to(x.dtype)
return torch.add(real_max.to(x.dtype), torch.multiply(img_max, const))
if axis == ():
if ivy.exists(out):
return ivy.inplace_update(out, x)
else:
return x
if not keepdims and not axis and axis != 0:
return torch.amax(input=x, out=out)
return torch.amax(input=x, dim=axis, keepdim=keepdims, out=out)
max.support_native_out = True
@with_supported_dtypes({"2.2 and below": ("float", "complex")}, backend_version)
def mean(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if axis is None:
num_dims = len(x.shape)
axis = list(range(num_dims))
if axis in [(), []]:
if ivy.exists(out):
return ivy.inplace_update(out, x)
else:
return x
return torch.mean(x, dim=axis, keepdim=keepdims, out=out)
mean.support_native_out = True
def _infer_dtype(dtype: torch.dtype) -> torch.dtype:
default_dtype = ivy.infer_default_dtype(dtype)
if default_dtype in ivy.valid_dtypes:
if ivy.dtype_bits(dtype) < ivy.dtype_bits(default_dtype):
return ivy.as_native_dtype(default_dtype)
return ivy.as_native_dtype(dtype)
# Function does support uint8, but allowing support for unsigned will cause
# the function to break the upcasting rule defined in the Array API Standard
@with_unsupported_dtypes(
{
"2.2 and below": ("uint8", "float16", "bfloat16"),
},
backend_version,
)
def prod(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[torch.dtype] = None,
keepdims: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
dtype = _infer_dtype(x.dtype)
if axis == ():
return x.type(dtype)
if axis is None:
return torch.prod(input=x, dtype=dtype)
if isinstance(axis, (tuple, list)):
for i in axis:
x = torch.prod(x, i, keepdim=keepdims, dtype=dtype)
return x
return torch.prod(x, axis, keepdim=keepdims, dtype=dtype)
@with_unsupported_dtypes(
{"2.2 and below": ("int8", "int16", "int32", "int64", "float16")},
backend_version,
)
def std(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0,
keepdims: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if axis is None:
axis = list(range(len(x.shape)))
if axis == ():
return x
axis = (axis,) if isinstance(axis, int) else tuple(axis)
if correction == 0:
return torch.std(x, dim=axis, unbiased=False, keepdim=keepdims)
elif correction == 1:
return torch.std(x, dim=axis, unbiased=True, keepdim=keepdims)
size = 1
for a in axis:
size *= x.shape[a]
if size - correction <= 0:
ret = torch.std(x, dim=axis, unbiased=False, keepdim=keepdims)
ret = ivy.full(ret.shape, float("nan"), dtype=ret.dtype)
return ret
ret = torch.mul(
torch.std(x, dim=axis, unbiased=False, keepdim=keepdims),
(size / (size - correction)) ** 0.5,
)
return ret
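# Illustrative note (added for exposition): for corrections other than 0 or 1
# the code rescales the biased estimate via
#     std_c = std_0 * sqrt(n / (n - c))
# e.g. n = 4 reduced elements with biased std 3.0 and correction c = 2 give
# 3.0 * sqrt(4 / 2) ~= 4.2426, and NaN once n - c <= 0.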
# Function does support uint8, but allowing support for unsigned will cause
# the function to break the upcasting rule defined in the Array API Standard
@with_unsupported_dtypes(
{"2.2 and below": ("uint8", "float16", "bfloat16")}, backend_version
)
def sum(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[torch.dtype] = None,
keepdims: Optional[bool] = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(dtype)
if dtype is None and not ivy.is_bool_dtype(x):
dtype = x.dtype
if axis == ():
return x.type(dtype)
axis = tuple(axis) if isinstance(axis, list) else axis
if axis is None:
return torch.sum(input=x, dim=(), dtype=dtype, keepdim=keepdims)
return torch.sum(input=x, dim=axis, dtype=dtype, keepdim=keepdims)
def var(
x: torch.Tensor,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0,
keepdims: bool = False,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if axis is None:
axis = list(range(len(x.shape)))
if axis == ():
return x
axis = (axis,) if isinstance(axis, int) else tuple(axis)
if correction == 0:
return torch.var(x, dim=axis, unbiased=False, keepdim=keepdims)
elif correction == 1:
return torch.var(x, dim=axis, unbiased=True, keepdim=keepdims)
size = 1
for a in axis:
size *= x.shape[a]
if size - correction <= 0:
ret = torch.var(x, dim=axis, unbiased=False, keepdim=keepdims)
ret = ivy.full(ret.shape, float("nan"), dtype=ret.dtype)
return ret
else:
return torch.mul(
torch.var(x, dim=axis, unbiased=False, keepdim=keepdims),
(size / (size - correction)),
).to(x.dtype)
# Extra #
# ----- #
# Function does support uint8, but allowing support for unsigned will cause
# the function to break the upcasting rule defined in the Array API Standard
# TODO: bfloat16 support was added in PyTorch 1.12.1
@with_unsupported_dtypes(
{
"2.2 and below": ("uint8", "float16", "bfloat16", "bool"),
},
backend_version,
)
def cumprod(
x: torch.Tensor,
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[torch.dtype] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
dtype = _infer_dtype(x.dtype)
if not (exclusive or reverse):
return torch.cumprod(x, axis, dtype=dtype, out=out)
elif exclusive and reverse:
x = torch.cumprod(torch.flip(x, dims=(axis,)), axis, dtype=dtype)
x = torch.transpose(x, axis, -1)
x = torch.concat((torch.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = torch.transpose(x, axis, -1)
ret = torch.flip(x, dims=(axis,))
elif exclusive:
x = torch.transpose(x, axis, -1)
x = torch.cat((torch.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = torch.cumprod(x, -1, dtype=dtype)
ret = torch.transpose(x, axis, -1)
else:
x = torch.cumprod(torch.flip(x, dims=(axis,)), axis, dtype=dtype)
ret = torch.flip(x, dims=(axis,))
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
cumprod.support_native_out = True
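# Illustrative sketch (hypothetical helper, added for exposition only): how
# the `exclusive` and `reverse` flags shift or flip the scan for [1, 2, 3, 4].
def _demo_cumprod():
    x = torch.tensor([1.0, 2.0, 3.0, 4.0])
    plain = cumprod(x)  # [ 1.,  2.,  6., 24.]
    excl = cumprod(x, exclusive=True)  # [ 1.,  1.,  2.,  6.]
    rev = cumprod(x, reverse=True)  # [24., 24., 12.,  4.]
    both = cumprod(x, exclusive=True, reverse=True)  # [24., 12.,  4.,  1.]
    return plain, excl, rev, both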
# Function does support uint8, but allowing support for unsigned will cause
# the function to break the upcasting rule defined in the Array API Standard
# TODO: bfloat16 support was added in PyTorch 1.12.1
@with_unsupported_dtypes(
{
"1.12.1 and below": ("uint8", "bool", "float16", "bfloat16"),
"1.12.1 and above": ("uint8", "bool", "float16"),
},
backend_version,
)
def cumsum(
x: torch.Tensor,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
*,
dtype: Optional[torch.dtype] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
if ivy.is_int_dtype(x.dtype):
            dtype = ivy.promote_types(x.dtype, ivy.default_int_dtype(as_native=True))
        else:
            dtype = _infer_dtype(x.dtype)
if exclusive or reverse:
if exclusive and reverse:
x = torch.cumsum(torch.flip(x, dims=(axis,)), axis, dtype=dtype)
x = torch.transpose(x, axis, -1)
x = torch.concat((torch.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = torch.transpose(x, axis, -1)
res = torch.flip(x, dims=(axis,))
elif exclusive:
x = torch.transpose(x, axis, -1)
x = torch.cat((torch.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = torch.cumsum(x, -1, dtype=dtype)
res = torch.transpose(x, axis, -1)
else:
x = torch.cumsum(torch.flip(x, dims=(axis,)), axis, dtype=dtype)
res = torch.flip(x, dims=(axis,))
if ivy.exists(out):
return ivy.inplace_update(out, res)
return res
return torch.cumsum(x, axis, dtype=dtype, out=out)
cumsum.support_native_out = True
@with_unsupported_dtypes(
{"2.2 and below": ("float16",)},
backend_version,
)
def einsum(
equation: str,
*operands: torch.Tensor,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
dtype = _get_promoted_type_of_operands(operands)
return ivy.astype(torch.einsum(equation, *operands), dtype, copy=False)
| ivy/ivy/functional/backends/torch/statistical.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/statistical.py",
"repo_id": "ivy",
"token_count": 4943
} | 26 |
jax_enable_x64 = False
def update(value, toggle):
global jax_enable_x64
if value == "jax_enable_x64":
jax_enable_x64 = toggle
| ivy/ivy/functional/frontends/jax/config.py/0 | {
"file_path": "ivy/ivy/functional/frontends/jax/config.py",
"repo_id": "ivy",
"token_count": 64
} | 27 |
# local
import ivy
from ivy.functional.frontends.jax import Array
from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs
from ivy.functional.frontends.numpy.linalg import lstsq as numpy_lstsq
@to_ivy_arrays_and_back
def cholesky(a):
return ivy.cholesky(a)
@to_ivy_arrays_and_back
def cond(x, p=None):
return ivy.cond(x, p=p)
@to_ivy_arrays_and_back
def det(a):
return ivy.det(a)
@to_ivy_arrays_and_back
def eig(a):
return ivy.eig(a)
@to_ivy_arrays_and_back
def eigh(a, UPLO="L", symmetrize_input=True):
def symmetrize(x):
# TODO : Take Hermitian transpose after complex numbers added
return (x + ivy.swapaxes(x, -1, -2)) / 2
if symmetrize_input:
a = symmetrize(a)
return ivy.eigh(a, UPLO=UPLO)
@to_ivy_arrays_and_back
def eigvals(a):
return ivy.eigvals(a)
@to_ivy_arrays_and_back
def eigvalsh(a, UPLO="L"):
return ivy.eigvalsh(a, UPLO=UPLO)
@to_ivy_arrays_and_back
def inv(a):
return ivy.inv(a)
# TODO: replace this with a function from the API,
# as the composition below can be numerically unstable
@to_ivy_arrays_and_back
def lstsq(a, b, rcond=None, *, numpy_resid=False):
if numpy_resid:
return numpy_lstsq(a, b, rcond=rcond)
least_squares_solution = ivy.matmul(
ivy.pinv(a, rtol=1e-15).astype(ivy.float64), b.astype(ivy.float64)
)
residuals = ivy.sum((b - ivy.matmul(a, least_squares_solution)) ** 2).astype(
ivy.float64
)
svd_values = ivy.svd(a, compute_uv=False)
rank = ivy.matrix_rank(a).astype(ivy.int32)
return (least_squares_solution, residuals, rank, svd_values[0])
@to_ivy_arrays_and_back
def matrix_power(a, n):
return ivy.matrix_power(a, n)
@to_ivy_arrays_and_back
def matrix_rank(M, tol=None):
return ivy.matrix_rank(M, atol=tol)
@to_ivy_arrays_and_back
def multi_dot(arrays, *, precision=None):
return ivy.multi_dot(arrays)
@to_ivy_arrays_and_back
@with_supported_dtypes(
{"0.4.24 and below": ("float32", "float64")},
"jax",
)
def norm(x, ord=None, axis=None, keepdims=False):
if ord is None:
ord = 2
if type(axis) in [list, tuple] and len(axis) == 2:
return Array(ivy.matrix_norm(x, ord=ord, axis=axis, keepdims=keepdims))
return Array(ivy.vector_norm(x, ord=ord, axis=axis, keepdims=keepdims))
@to_ivy_arrays_and_back
def pinv(a, rcond=None):
return ivy.pinv(a, rtol=rcond)
@to_ivy_arrays_and_back
def qr(a, mode="reduced"):
return ivy.qr(a, mode=mode)
@to_ivy_arrays_and_back
def slogdet(a, method=None):
return ivy.slogdet(a)
@to_ivy_arrays_and_back
def solve(a, b):
return ivy.solve(a, b)
@to_ivy_arrays_and_back
def svd(a, /, *, full_matrices=True, compute_uv=True, hermitian=None):
if not compute_uv:
return ivy.svdvals(a)
return ivy.svd(a, full_matrices=full_matrices)
@to_ivy_arrays_and_back
@with_unsupported_dtypes({"0.4.24 and below": ("float16", "bfloat16")}, "jax")
def tensorinv(a, ind=2):
old_shape = ivy.shape(a)
prod = 1
if ind > 0:
invshape = old_shape[ind:] + old_shape[:ind]
for k in old_shape[ind:]:
prod *= k
else:
raise ValueError("Invalid ind argument.")
a = ivy.reshape(a, shape=(prod, -1))
ia = ivy.inv(a)
new_shape = (*invshape,)
return Array(ivy.reshape(ia, shape=new_shape))
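# Illustrative sketch (hypothetical helper, added for exposition only):
# `tensorinv` folds the first `ind` axes into rows and the remaining axes into
# columns, inverts that square matrix, and unfolds with the two axis groups
# swapped, so prod(shape[:ind]) must equal prod(shape[ind:]).
def _demo_tensorinv():
    a = ivy.eye(6).reshape((2, 3, 6))  # its (6, 6) matricisation is I
    inv_a = tensorinv(a, ind=2)  # shape (6, 2, 3); also the identity
    return inv_a.shape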
@to_ivy_arrays_and_back
def tensorsolve(a, b, axes=None):
a, b = promote_types_of_jax_inputs(a, b)
return ivy.tensorsolve(a, b, axes=axes)
| ivy/ivy/functional/frontends/jax/numpy/linalg.py/0 | {
"file_path": "ivy/ivy/functional/frontends/jax/numpy/linalg.py",
"repo_id": "ivy",
"token_count": 1730
} | 28 |
class Tensor:
pass
| ivy/ivy/functional/frontends/mindspore/tensor.py/0 | {
"file_path": "ivy/ivy/functional/frontends/mindspore/tensor.py",
"repo_id": "ivy",
"token_count": 10
} | 29 |
import ivy
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
handle_numpy_dtype,
)
@to_ivy_arrays_and_back
def diag(v, k=0):
return ivy.diag(v, k=k)
# diagflat
@to_ivy_arrays_and_back
def diagflat(v, k=0):
ret = ivy.diagflat(v, offset=k)
while len(ivy.shape(ret)) < 2:
ret = ret.expand_dims(axis=0)
return ret
@handle_numpy_dtype
@to_ivy_arrays_and_back
def tri(N, M=None, k=0, dtype="float64", *, like=None):
if M is None:
M = N
ones = ivy.ones((N, M), dtype=dtype)
return ivy.tril(ones, k=k)
@to_ivy_arrays_and_back
def tril(m, k=0):
return ivy.tril(m, k=k)
@to_ivy_arrays_and_back
def triu(m, k=0):
return ivy.triu(m, k=k)
@to_ivy_arrays_and_back
def vander(x, N=None, increasing=False):
if ivy.is_float_dtype(x):
x = x.astype(ivy.float64)
    elif ivy.is_bool_dtype(x) or ivy.is_int_dtype(x):
x = x.astype(ivy.int64)
return ivy.vander(x, N=N, increasing=increasing)
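# Illustrative sketch (hypothetical helper, added for exposition only): column
# j of the Vandermonde matrix holds x ** (N - 1 - j), or x ** j when
# `increasing=True`.
def _demo_vander():
    x = ivy.array([1, 2, 3])
    dec = vander(x)  # [[1, 1, 1], [4, 2, 1], [9, 3, 1]]
    inc = vander(x, increasing=True)  # [[1, 1, 1], [1, 2, 4], [1, 3, 9]]
    return dec, inc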
| ivy/ivy/functional/frontends/numpy/creation_routines/building_matrices.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/creation_routines/building_matrices.py",
"repo_id": "ivy",
"token_count": 517
} | 30 |
import ivy
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
inputs_to_ivy_arrays,
handle_numpy_out,
)
@to_ivy_arrays_and_back
@handle_numpy_out
def compress(condition, a, axis=None, out=None):
condition_arr = ivy.asarray(condition).astype(bool)
if condition_arr.ndim != 1:
raise ivy.utils.exceptions.IvyException("Condition must be a 1D array")
if axis is None:
arr = ivy.asarray(a).flatten()
axis = 0
else:
arr = ivy.moveaxis(a, axis, 0)
if condition_arr.shape[0] > arr.shape[0]:
raise ivy.utils.exceptions.IvyException(
"Condition contains entries that are out of bounds"
)
arr = arr[: condition_arr.shape[0]]
return ivy.moveaxis(arr[condition_arr], 0, axis)
@to_ivy_arrays_and_back
def diag(v, k=0):
    return ivy.diag(v, k=k)
@to_ivy_arrays_and_back
def diagonal(a, offset, axis1, axis2):
return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)
@to_ivy_arrays_and_back
def fill_diagonal(a, val, wrap=False):
if a.ndim < 2:
raise ValueError("array must be at least 2-d")
end = None
if a.ndim == 2:
# Explicit, fast formula for the common case. For 2-d arrays, we
# accept rectangular ones.
step = a.shape[1] + 1
        # This is needed so that tall matrices don't have the diagonal wrap.
if not wrap:
end = a.shape[1] * a.shape[1]
else:
# For more than d=2, the strided formula is only valid for arrays with
# all dimensions equal, so we check first.
if not ivy.all(ivy.diff(a.shape) == 0):
raise ValueError("All dimensions of input must be of equal length")
step = 1 + ivy.sum(ivy.cumprod(a.shape[:-1]))
# Write the value out into the diagonal.
shape = a.shape
a = ivy.reshape(a, a.size)
a[:end:step] = val
a = ivy.reshape(a, shape)
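# Illustrative note (added for exposition): with the array flattened, the
# diagonal of an (n, m) matrix sits every m + 1 positions, so for a 3x4 matrix
# `a[:end:step]` with step = 5 hits flat indices 0, 5, 10. Without `wrap`,
# end = m * m stops a tall matrix from wrapping onto a second diagonal band.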
@to_ivy_arrays_and_back
def indices(dimensions, dtype=int, sparse=False):
dimensions = tuple(dimensions)
N = len(dimensions)
shape = (1,) * N
if sparse:
res = ()
else:
res = ivy.empty((N,) + dimensions, dtype=dtype)
for i, dim in enumerate(dimensions):
idx = ivy.arange(dim, dtype=dtype).reshape(shape[:i] + (dim,) + shape[i + 1 :])
if sparse:
res = res + (idx,)
else:
res[i] = idx
return res
@inputs_to_ivy_arrays
def put_along_axis(arr, indices, values, axis):
ivy.put_along_axis(arr, indices, values, axis)
@to_ivy_arrays_and_back
@handle_numpy_out
def take(a, indices, /, *, axis=None, out=None, mode="raise"):
return ivy.take(a, indices, axis=axis, out=out, mode=mode)
@to_ivy_arrays_and_back
def take_along_axis(arr, indices, axis):
return ivy.take_along_axis(arr, indices, axis)
@to_ivy_arrays_and_back
def tril_indices(n, k=0, m=None):
return ivy.tril_indices(n, m, k)
# unravel_index
@to_ivy_arrays_and_back
def unravel_index(indices, shape, order="C"):
ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
return tuple(ret)
| ivy/ivy/functional/frontends/numpy/indexing_routines/indexing_like_operations.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/indexing_routines/indexing_like_operations.py",
"repo_id": "ivy",
"token_count": 1373
} | 31 |
# global
import ivy
import numbers
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
from_zero_dim_arrays_to_scalar,
handle_numpy_out,
)
import ivy.functional.frontends.numpy as np_frontend
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def all(
a,
axis=None,
out=None,
keepdims=False,
*,
where=None,
):
axis = tuple(axis) if isinstance(axis, list) else axis
if where is not None:
a = ivy.where(where, a, True)
ret = ivy.all(a, axis=axis, keepdims=keepdims, out=out)
return ret
@handle_numpy_out
@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def any(
a,
axis=None,
out=None,
keepdims=False,
*,
where=None,
):
axis = tuple(axis) if isinstance(axis, list) else axis
if where is not None:
a = ivy.where(where, a, False)
ret = ivy.any(a, axis=axis, keepdims=keepdims, out=out)
return ret
@to_ivy_arrays_and_back
def iscomplex(x):
return ivy.bitwise_invert(ivy.isreal(x))
@to_ivy_arrays_and_back
def iscomplexobj(x):
    # the dtype is shared by every element, so checking the array's dtype
    # directly also covers 0-d and empty inputs
    return bool(ivy.is_complex_dtype(ivy.dtype(x)))
@to_ivy_arrays_and_back
def isfortran(a):
return a.flags.fnc
@to_ivy_arrays_and_back
def isreal(x):
return ivy.isreal(x)
@to_ivy_arrays_and_back
def isrealobj(x):
return not ivy.is_complex_dtype(ivy.dtype(x))
@to_ivy_arrays_and_back
def isscalar(element):
return isinstance(
element,
(
int,
float,
complex,
bool,
bytes,
str,
memoryview,
numbers.Number,
np_frontend.generic,
),
)
| ivy/ivy/functional/frontends/numpy/logic/truth_value_testing.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/logic/truth_value_testing.py",
"repo_id": "ivy",
"token_count": 899
} | 32 |
# local
import ivy
from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back
@to_ivy_arrays_and_back
def flip(m, axis=None):
return ivy.flip(m, axis=axis, out=None)
@to_ivy_arrays_and_back
def fliplr(m):
return ivy.fliplr(m, out=None)
@to_ivy_arrays_and_back
def flipud(m):
return ivy.flipud(m, out=None)
@to_ivy_arrays_and_back
def roll(a, shift, axis=None):
return ivy.roll(a, shift, axis=axis)
@to_ivy_arrays_and_back
def rot90(m, k=1, axes=(0, 1)):
return ivy.rot90(m, k=k, axes=axes)
| ivy/ivy/functional/frontends/numpy/manipulation_routines/rearranging_elements.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/manipulation_routines/rearranging_elements.py",
"repo_id": "ivy",
"token_count": 257
} | 33 |
# global
import ivy
# local
from ivy.functional.frontends.numpy.func_wrapper import (
to_ivy_arrays_and_back,
handle_numpy_casting,
handle_numpy_dtype,
from_zero_dim_arrays_to_scalar,
handle_numpy_out,
)
# --- Helpers --- #
# --------------- #
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arccos(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.acos(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arccosh(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.acosh(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arcsin(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.asin(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arctan(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.atan(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
# arctan2
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arctan2(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.atan2(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _cos(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="k",
dtype=None,
subok=True,
):
ret = ivy.cos(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _deg2rad(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
signature=None,
extobj=None,
):
ret = ivy.deg2rad(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _degrees(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.rad2deg(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _rad2deg(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.rad2deg(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _sin(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="k",
dtype=None,
subok=True,
):
ret = ivy.sin(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _tan(
x,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.tan(x, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret
| ivy/ivy/functional/frontends/numpy/mathematical_functions/trigonometric_functions.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/mathematical_functions/trigonometric_functions.py",
"repo_id": "ivy",
"token_count": 2472
} | 34 |
# global
import ivy
from ivy.functional.frontends.numpy.func_wrapper import to_ivy_arrays_and_back
@to_ivy_arrays_and_back
def argsort(
x,
/,
*,
axis=-1,
kind=None,
order=None,
):
return ivy.argsort(x, axis=axis)
@to_ivy_arrays_and_back
def lexsort(keys, /, *, axis=-1):
return ivy.lexsort(keys, axis=axis)
@to_ivy_arrays_and_back
def msort(a):
return ivy.msort(a)
@to_ivy_arrays_and_back
def partition(a, kth, axis=-1, kind="introselect", order=None):
    # accept a single index as well as a sequence, mirroring numpy.partition
    kth = [kth] if isinstance(kth, int) else kth
    sorted_arr = ivy.sort(a, axis=axis)
for k in kth:
index_to_remove = ivy.argwhere(a == sorted_arr[k])[0, 0]
if len(a) == 1:
a = ivy.array([], dtype=a.dtype)
else:
a = ivy.concat((a[:index_to_remove], a[index_to_remove + 1 :]))
left = ivy.array([], dtype=a.dtype)
right = ivy.array([], dtype=a.dtype)
equal = ivy.array([], dtype=a.dtype)
for i in range(len(a)):
if a[i] < sorted_arr[k]:
left = ivy.concat((left, ivy.array([a[i]], dtype=a.dtype)))
elif a[i] > sorted_arr[k]:
right = ivy.concat((right, ivy.array([a[i]], dtype=a.dtype)))
else:
equal = ivy.concat((equal, ivy.array([a[i]], dtype=a.dtype)))
for j in range(len(equal)):
if len(left) == len(sorted_arr[:k]):
right = ivy.concat((right, ivy.array([equal[j]], dtype=a.dtype)))
else:
left = ivy.concat((left, ivy.array([equal[j]], dtype=a.dtype)))
a = ivy.concat((left, ivy.array([sorted_arr[k]], dtype=a.dtype), right))
return a
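# Illustrative sketch (hypothetical helper, added for exposition only): after
# partitioning, everything left of position k is <= the k-th order statistic
# and everything right of it is >=, without either side being fully sorted.
def _demo_partition():
    out = partition(ivy.array([7, 1, 5, 3, 9]), [2])
    # out[2] == 5; out[:2] holds {1, 3}, out[3:] holds {7, 9}
    return out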
@to_ivy_arrays_and_back
def sort(a, axis=-1, kind=None, order=None):
return ivy.sort(a, axis=axis)
@to_ivy_arrays_and_back
def sort_complex(a):
return ivy.sort(a)
| ivy/ivy/functional/frontends/numpy/sorting_searching_counting/sorting.py/0 | {
"file_path": "ivy/ivy/functional/frontends/numpy/sorting_searching_counting/sorting.py",
"repo_id": "ivy",
"token_count": 952
} | 35 |
# global
import ivy
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import (
to_ivy_arrays_and_back,
)
@to_ivy_arrays_and_back
def imag(x):
return ivy.imag(x)
@to_ivy_arrays_and_back
def is_complex(x):
return ivy.is_complex_dtype(x)
@to_ivy_arrays_and_back
def is_floating_point(x):
return ivy.is_float_dtype(x)
@to_ivy_arrays_and_back
def is_integer(x):
return ivy.is_int_dtype(x)
@to_ivy_arrays_and_back
def rank(input):
return ivy.get_num_dims(input)
@with_supported_dtypes(
{
"2.6.0 and below": (
"complex64",
"complex128",
)
},
"paddle",
)
@to_ivy_arrays_and_back
def real(x):
return ivy.real(x)
| ivy/ivy/functional/frontends/paddle/attribute.py/0 | {
"file_path": "ivy/ivy/functional/frontends/paddle/attribute.py",
"repo_id": "ivy",
"token_count": 367
} | 36 |
# local
import ivy
from ivy.func_wrapper import with_supported_dtypes
import ivy.functional.frontends.paddle as paddle
from ivy.utils.exceptions import handle_exceptions
from ivy.functional.frontends.paddle.func_wrapper import (
inputs_to_ivy_arrays,
to_ivy_arrays_and_back,
)
# --- Helpers --- #
# --------------- #
def _get_reduction_func(reduction):
if reduction == "none":
def ret(x):
return x
elif reduction == "mean":
ret = ivy.mean
elif reduction == "sum":
ret = ivy.sum
else:
raise ivy.utils.exceptions.IvyException(
f"{reduction} is not a valid value for reduction"
)
return ret
def _pairwise_distance(x1, x2, *, p=2.0, eps=1e-06, keepdim=False):
x1, x2 = paddle.promote_types_of_paddle_inputs(x1, x2)
x1_dim = len(x1.shape)
x2_dim = len(x2.shape)
if x1_dim > x2_dim:
output_dim = x1_dim
else:
output_dim = x2_dim
return ivy.vector_norm(x1 - x2 + eps, ord=p, axis=output_dim - 1, keepdims=keepdim)
# --- Main --- #
# ------------ #
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def binary_cross_entropy(input, label, weight=None, reduction="mean", name=None):
reduction = _get_reduction_func(reduction)
result = ivy.binary_cross_entropy(label, input, epsilon=0.0, reduction="none")
if weight is not None:
result = ivy.multiply(weight, result)
result = reduction(result)
return result
@with_supported_dtypes(
{"2.6.0 and below": ("float32",)},
"paddle",
)
@inputs_to_ivy_arrays
def binary_cross_entropy_with_logits(
logit,
label,
weight=None,
reduction="mean",
pos_weight=None,
name=None,
):
ret = ivy.binary_cross_entropy(
label, logit, from_logits=True, reduction="none", pos_weight=pos_weight
)
reduction = _get_reduction_func(reduction)
if weight is not None:
ret = ivy.multiply(weight, ret)
ret = reduction(ret).astype(label.dtype)
return paddle.to_tensor(ivy.atleast_1d(ret))
@handle_exceptions
@to_ivy_arrays_and_back
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
def cosine_embedding_loss(
input1, input2, label, margin=0.0, reduction="mean", name=None
):
if len(label.shape) != 1:
raise ValueError("1D target tensor expected, multi-target not supported")
if input1.shape != input2.shape:
raise ValueError(
"the shape of input tensor 1 should be equal to input tensor 2, but found"
" inputs with different sizes"
)
if len(input1.shape) > 2:
raise ValueError(
"1D target tensor expects 1D or 2D input tensors, but found inputs with"
" different sizes"
)
prod_sum = (input1 * input2).sum(axis=-1)
mag_square1 = ivy.square(input1).sum(axis=-1) + 1e-11
mag_square2 = ivy.square(input2).sum(axis=-1) + 1e-11
denom = ivy.sqrt(mag_square1 * mag_square2)
cos = prod_sum / denom
zeros = ivy.zeros_like(cos)
pos = 1 - cos
    neg = ivy.maximum(cos - margin, 0)
out_pos = ivy.where(label == 1, pos, zeros)
out_neg = ivy.where(label == -1, neg, zeros)
out = out_pos + out_neg
if reduction == "none":
pass
if reduction == "mean":
out = ivy.mean(out)
elif reduction == "sum":
out = ivy.sum(out)
return out
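# Illustrative note (added for exposition): with cos = <x1, x2> / (|x1| * |x2|)
# the per-sample loss above is
#     1 - cos               if label == 1
#     max(0, cos - margin)  if label == -1
# so similar pairs are pulled towards cos = 1, while dissimilar pairs are only
# penalised as long as their cosine similarity still exceeds the margin.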
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def dice_loss(input, label, epsilon=0.00001, name=None):
ivy.assertions.check_true(
len(input.shape) >= 2,
message="The rank of input should be greater than or equal to 2.",
)
ivy.assertions.check_true(
len(input.shape) == len(label.shape),
message=str(
"The rank of input and label should be equal, "
f"but received input: {len(input.shape)}, label: {len(label.shape)}."
),
)
ivy.assertions.check_true(
label.shape[-1] == 1,
message=str(
f"The last dimension of label should be 1, but received {label.shape[-1]}."
),
)
ivy.assertions.check_true(
tuple(input.shape[:-1]) == tuple(label.shape[:-1]),
message="All dimensions should be equal except the last one.",
)
ivy.assertions.check_true(
input.size > 0 and label.size > 0,
message="Any dimension of input and label cannot be equal to 0.",
)
label = ivy.squeeze(label, axis=-1)
label = ivy.one_hot(label, input.shape[-1])
reduce_dim = list(range(1, len(input.shape)))
intersect = ivy.multiply(input, label)
inse = ivy.sum(intersect, axis=reduce_dim)
dice_denominator = ivy.sum(input, axis=reduce_dim) + ivy.sum(label, axis=reduce_dim)
dice_score = 1 - inse * 2 / (dice_denominator + epsilon)
return ivy.mean(dice_score)
@with_supported_dtypes(
{"2.6.0 and below": ("float32",)},
"paddle",
)
@to_ivy_arrays_and_back
def hinge_embedding_loss(input, label, margin=1.0, reduction="mean"):
if reduction not in ["sum", "mean", "none"]:
raise ValueError(
"'reduction' in 'hinge_embedding_loss' should be 'sum', 'mean' or 'none',"
f" but received {reduction}."
)
zero_ = ivy.zeros([1], dtype=input.dtype)
loss = ivy.where(label == 1.0, input, zero_) + ivy.where(
label == -1.0, ivy.functional.ivy.activations.relu(margin - input), zero_
)
if reduction == "mean":
return ivy.mean(loss)
elif reduction == "sum":
return ivy.sum(loss)
elif reduction == "none":
return loss
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def kl_div(
input,
label,
reduction="mean",
name=None,
):
if input.shape != label.shape:
raise ValueError(
"the shape of input tensor should be equal to target tensor, but found"
" inputs with different sizes"
)
out = label * (ivy.log(label) - input)
size = ivy.shape(input)
if len(size) < 1:
size = [1]
if reduction == "mean":
out = ivy.mean(out)
elif reduction == "batchmean":
out = ivy.sum(out) / size[0]
elif reduction == "sum":
out = ivy.sum(out)
else:
pass
return out.astype(label.dtype)
@inputs_to_ivy_arrays
def l1_loss(
input,
label,
reduction="mean",
name=None,
):
sum_diff = ivy.abs(input - label)
reduction = _get_reduction_func(reduction)
out = reduction(sum_diff)
if out.shape == ():
out = out.expand_dims()
return paddle.to_tensor(out)
@with_supported_dtypes(
{"2.6.0 and below": ("float32",)},
"paddle",
)
@to_ivy_arrays_and_back
def log_loss(input, label, epsilon=0.0001, name=None):
out = -label * ivy.log(input + epsilon) - (
(1 - label) * ivy.log(1 - input + epsilon)
)
return out
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def margin_ranking_loss(input, other, label, margin=0.0, reduction="mean", name=None):
reduction = _get_reduction_func(reduction)
out = ivy.subtract(input, other)
neg_label = ivy.negative(label)
out = ivy.multiply(neg_label, out)
if margin != 0.0:
margin_var = ivy.full([1], margin, dtype=out.dtype)
out = ivy.add(out, margin_var)
out = ivy.where(out < 0, 0, out)
out = reduction(out).astype(input.dtype)
out = ivy.atleast_1d(out)
return out
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@inputs_to_ivy_arrays
def mse_loss(input, label, reduction="mean", name=None):
reduction = _get_reduction_func(reduction)
ret = ivy.square(input - label)
ret = reduction(ret)
return paddle.to_tensor(ret)
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def multi_label_soft_margin_loss(
input, label, weight=None, reduction="mean", name=None
):
reduction = _get_reduction_func(reduction)
loss = -(
label * ivy.log(ivy.sigmoid(input))
+ (1 - label) * ivy.log(1 - ivy.sigmoid(input))
)
if weight is not None:
loss = ivy.multiply(weight, loss)
loss = ivy.mean(loss, axis=-1)
ret = reduction(loss).astype(input.dtype)
return ret
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def nll_loss(
input,
label,
weight=None,
ignore_index=-100,
reduction="mean",
):
"""Refer
https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss for
more on NLL(Negative log likelihood) Loss."""
if weight is None:
weight = ivy.ones(ivy.shape(input[0]))
input = ivy.log(input)
loss = ivy.zeros(ivy.shape(label))
den = 0
for i in range(0, ivy.shape(loss)[0]):
den = den + weight[label[i]]
loss[i] = -weight[label[i]] * input[i][label[i]]
output = 0.0
if reduction == "sum":
output = ivy.sum(loss)
if ignore_index >= 0 and ignore_index < ivy.shape(input)[1]:
output = output - loss[ignore_index]
return output
num = ivy.sum(loss)
output = num / den
if ignore_index >= 0 and ignore_index < ivy.shape(input)[1]:
output = output - loss[ignore_index] / den
return output
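# Illustrative note (added for exposition): with class weights w, the mean
# reduction above is the weighted average
#     loss = sum_i w[y_i] * (-log p_i[y_i]) / sum_i w[y_i]
# e.g. two samples with predicted p[y] = (0.5, 0.25) and weights (1, 2) give
# (1 * 0.693 + 2 * 1.386) / 3 ~= 1.155.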
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def sigmoid_focal_loss(
logit,
label,
normalizer=None,
alpha=0.25,
gamma=2.0,
reduction="sum",
name=None,
):
if reduction not in ["sum", "mean", "none"]:
raise ValueError(
"The value of 'reduction' in sigmoid_focal_loss should be 'sum', 'mean' or"
f" 'none', but received {reduction}, which is not allowed."
)
if normalizer is not None and normalizer.ndim > 1:
raise ValueError(
"Expected zero or one dimension of normalizer in sigmoid_focal_loss but"
f" got {normalizer.ndim}."
)
if not isinstance(logit, ivy.Array):
logit = ivy.array(logit)
if not isinstance(label, ivy.Array):
label = ivy.array(label)
pred = ivy.sigmoid(logit)
loss = -(
label * alpha * ivy.pow((1 - pred), gamma) * ivy.log(pred)
+ (1 - label) * (1 - alpha) * ivy.pow(pred, gamma) * ivy.log(1 - pred)
)
if normalizer is not None:
loss /= normalizer
if reduction == "sum":
return ivy.sum(loss)
elif reduction == "mean":
return ivy.mean(loss)
return loss
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def smooth_l1_loss(
input,
label,
reduction="mean",
delta=1.0,
name=None,
):
sum_diff = ivy.abs(input - label).astype(label.dtype)
condition = sum_diff <= delta
out = ivy.where(
condition,
0.5 * ivy.pow(ivy.abs(input - label), 2).astype(label.dtype),
        (delta * ivy.abs(input - label)).astype(label.dtype)
- (0.5 * ivy.pow(delta, 2)).astype(label.dtype),
)
if reduction == "none":
pass
elif reduction == "mean":
out = ivy.mean(out)
elif reduction == "sum":
out = ivy.sum(out)
return out.astype(label.dtype)
@with_supported_dtypes(
{"2.6.0 and below": ("float32", "float64")},
"paddle",
)
@inputs_to_ivy_arrays
def softmax_with_cross_entropy(
logits,
label,
soft_label=False,
ignore_index=-100,
numeric_stable_mode=True,
return_softmax=False,
axis=-1,
):
input_dims = len(list(logits.shape))
if input_dims == 0:
raise ValueError("The dimension of input should be larger than zero!")
label_dims = len(list(label.shape))
if input_dims - 1 != label_dims and input_dims != label_dims:
raise ValueError(
"Expected nput_dims - 1 = label_dims or input_dims == label_dims "
f" (got nput_dims{input_dims}, label_dims{label_dims})"
)
logits = ivy.array(logits)
label = ivy.array(label)
if input_dims - 1 == label_dims:
label = ivy.expand_dims(label, axis=axis)
if numeric_stable_mode:
max_logits = ivy.max(logits, axis=axis, keepdims=True)
log_max_sum_logits = ivy.log(
ivy.sum(ivy.exp(ivy.subtract(logits, max_logits)), axis=axis, keepdims=True)
)
softmax = ivy.exp(
ivy.subtract(ivy.subtract(logits, max_logits), log_max_sum_logits)
)
else:
softmax = ivy.softmax(logits, axis=axis)
if soft_label:
loss = -ivy.sum(
ivy.multiply(
label,
ivy.subtract(
logits, ivy.log(ivy.sum(ivy.exp(logits), axis=axis, keepdims=True))
),
),
axis=axis,
keepdims=True,
)
else:
mask = ivy.not_equal(label.astype("float64"), float(ignore_index))
loss = ivy.add(
-ivy.take_along_axis(logits, label, axis),
ivy.log(ivy.sum(ivy.exp(logits), axis=axis, keepdims=True)),
)
loss = ivy.multiply(loss, mask)
if return_softmax:
return paddle.to_tensor(loss), paddle.to_tensor(softmax)
return paddle.to_tensor(loss)
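# Illustrative note (added for exposition): the numerically stable branch uses
# the standard log-sum-exp shift; with m = max_j logits_j,
#     log(sum_j exp(logits_j)) = m + log(sum_j exp(logits_j - m))
# so the softmax becomes exp(logits - m - log(sum_j exp(logits_j - m))) and no
# exp() ever sees a large positive argument.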
@with_supported_dtypes({"2.6.0 and below": ("float32",)}, "paddle")
@to_ivy_arrays_and_back
def square_error_cost(input, label):
return ivy.square(ivy.subtract(input, label))
@with_supported_dtypes({"2.6.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def triplet_margin_loss(
input,
positive,
negative,
margin=1.0,
p=2.0,
eps=1e-06,
swap=False,
reduction="mean",
):
reduction = _get_reduction_func(reduction)
a_dim = input.ndim
p_dim = positive.ndim
n_dim = negative.ndim
ivy.assertions.check_true(
a_dim == p_dim and p_dim == n_dim,
lambda: (
"The input, positive, and negative tensors are expected to have "
f"the same number of dimensions, but got: input {a_dim}D, "
f"positive {p_dim}D, and negative {n_dim}D inputs"
),
)
dist_positive = _pairwise_distance(input, positive, p=p, eps=eps)
dist_negative = _pairwise_distance(input, negative, p=p, eps=eps)
if swap:
dist_swap = _pairwise_distance(positive, negative, p=p, eps=eps)
dist_negative = ivy.minimum(dist_negative, dist_swap)
loss = ivy.maximum(
dist_positive - dist_negative + ivy.array(margin), ivy.array(0.0)
)
loss = reduction(loss).astype(input.dtype)
return loss
| ivy/ivy/functional/frontends/paddle/nn/functional/loss.py/0 | {
"file_path": "ivy/ivy/functional/frontends/paddle/nn/functional/loss.py",
"repo_id": "ivy",
"token_count": 6740
} | 37 |
# global
from ..stat import * # noqa: F401
| ivy/ivy/functional/frontends/paddle/tensor/stat.py/0 | {
"file_path": "ivy/ivy/functional/frontends/paddle/tensor/stat.py",
"repo_id": "ivy",
"token_count": 16
} | 38 |
from .fft import *
| ivy/ivy/functional/frontends/scipy/fft/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/scipy/fft/__init__.py",
"repo_id": "ivy",
"token_count": 7
} | 39 |
from .optimize import *
| ivy/ivy/functional/frontends/scipy/optimize/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/scipy/optimize/__init__.py",
"repo_id": "ivy",
"token_count": 7
} | 40 |
from . import _classes
from ._classes import *
from . import _criterion
from ._criterion import *
from . import _splitter
from ._splitter import *
from . import _tree
from ._tree import *
| ivy/ivy/functional/frontends/sklearn/tree/__init__.py/0 | {
"file_path": "ivy/ivy/functional/frontends/sklearn/tree/__init__.py",
"repo_id": "ivy",
"token_count": 52
} | 41 |
# global
from builtins import slice as py_slice, range as py_range
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.tensorflow.func_wrapper import (
to_ivy_arrays_and_back,
handle_tf_dtype,
to_ivy_dtype,
)
from ivy.functional.frontends.tensorflow.tensor import EagerTensor
import ivy.functional.frontends.tensorflow as tf_frontend
from ivy.functional.frontends.tensorflow import check_tensorflow_casting
import functools
# --- Helpers --- #
# --------------- #
def _num_to_bit_list(value, num_dims):
return list(map(int, f"{value:0{num_dims}b}"))[::-1]
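# Illustrative sketch (hypothetical helper, added for exposition only): the
# masks consumed by strided_slice are integers whose bit i refers to axis i,
# so e.g. begin_mask = 5 over 4 dims unpacks to [1, 0, 1, 0], meaning axes 0
# and 2 ignore their `begin` entry.
def _demo_num_to_bit_list():
    return _num_to_bit_list(5, 4)  # [1, 0, 1, 0]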
# --- Main --- #
# ------------ #
@to_ivy_arrays_and_back
def argsort(values, axis=-1, direction="ASCENDING", stable=False, name=None):
if direction == "DESCENDING":
descending = True
else:
descending = False
return ivy.argsort(values, axis=axis, descending=descending, stable=stable).astype(
"int32"
)
@to_ivy_arrays_and_back
def boolean_mask(tensor, mask, axis=None, name=None):
if axis is None or axis == 0:
return ivy.get_item(tensor, mask)
else:
n = ivy.get_num_dims(tensor)
k = ivy.get_num_dims(mask)
if axis < 0:
axis = n + axis
ivy.utils.assertions.check_less(
k + axis,
n,
allow_equal=True,
message="Value of axis must be such that axis + dim(mask) <= dim(tensor)",
as_array=False,
)
tensor_shape = ivy.shape(tensor)
range_array = ivy.arange(axis - 1, -1, -1)
for i in ivy.to_list(range_array):
mask = ivy.expand_dims(mask, axis=0)
mask = ivy.repeat(mask, tensor_shape[i], axis=0)
return ivy.get_item(tensor, mask)
@with_supported_dtypes({"2.15.0 and below": ("float32",)}, "tensorflow")
@to_ivy_arrays_and_back
def clip_by_global_norm(t_list, clip_norm, use_norm=None):
if use_norm is not None:
global_norm = use_norm
else:
global_norm = ivy.sqrt(ivy.sum([ivy.vector_norm(t) ** 2 for t in t_list]))
max_clip_ratio = ivy.maximum(clip_norm, global_norm)
return [
ivy.multiply(t, ivy.divide(clip_norm, max_clip_ratio)) for t in t_list
], global_norm
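# Illustrative note (added for exposition): every tensor in the list is scaled
# by the same factor clip_norm / max(clip_norm, global_norm), which is the
# identity when global_norm <= clip_norm and otherwise a uniform shrink that
# makes the list's joint L2 norm exactly clip_norm.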
@with_supported_dtypes({"2.15.0 and below": ("float", "complex")}, "tensorflow")
@to_ivy_arrays_and_back
def clip_by_norm(t, clip_norm, axes=None):
t, clip_norm = check_tensorflow_casting(t, clip_norm)
l2sum = ivy.sum(t * t, axis=axes, keepdims=True)
pred = l2sum > 0
l2sum_safe = ivy.where(pred, l2sum, ivy.ones_like(l2sum))
l2norm = ivy.where(pred, ivy.sqrt(l2sum_safe), l2sum)
intermediate = t * clip_norm
assert (
t.shape == intermediate.shape
), f"Dimensions {t.shape} and {intermediate.shape} are not compatible"
t_clip = intermediate / ivy.maximum(l2norm, clip_norm)
return t_clip
@to_ivy_arrays_and_back
@with_unsupported_dtypes({"2.15.0 and below": ("float16",)}, "tensorflow")
def clip_by_value(t, clip_value_min, clip_value_max):
ivy.utils.assertions.check_all_or_any_fn(
clip_value_min,
clip_value_max,
fn=ivy.exists,
type="all",
message="clip_value_min and clip_value_max must exist",
)
t = ivy.array(t)
return ivy.clip(t, clip_value_min, clip_value_max)
@to_ivy_arrays_and_back
def concat(values, axis, name=None):
return ivy.concat(values, axis=axis)
@to_ivy_arrays_and_back
def cond(pred, true_fn=None, false_fn=None, name=None):
if true_fn is None:
raise TypeError("cond(): 'true_fn' argument required")
if false_fn is None:
raise TypeError("cond(): 'false_fn' argument required")
if not callable(true_fn):
raise TypeError("'true_fn' must be callable.")
if not callable(false_fn):
raise TypeError("'false_fn' must be callable.")
if pred:
return true_fn()
if not pred:
return false_fn()
@handle_tf_dtype
def constant(value, dtype=None, shape=None, name=None):
if shape is not None:
value = ivy.reshape(value, shape=shape)
if dtype is not None:
return EagerTensor(ivy.astype(value, dtype))
return EagerTensor(value)
@handle_tf_dtype
def convert_to_tensor(value, dtype=None, dtype_hint=None, name=None):
if dtype:
return tf_frontend.cast(value, dtype)
elif dtype_hint:
return tf_frontend.cast(value, dtype_hint)
if hasattr(value, "ivy_array"):
return EagerTensor(value.ivy_array)
return EagerTensor(value)
@to_ivy_arrays_and_back
def einsum(equation, *inputs, **kwargs):
return ivy.einsum(equation, *inputs)
@to_ivy_arrays_and_back
def ensure_shape(x, shape, name=None):
x = EagerTensor(x)
x.set_shape(shape)
return x
@to_ivy_arrays_and_back
def expand_dims(input, axis, name=None):
return ivy.expand_dims(input, axis=axis)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@handle_tf_dtype
@to_ivy_arrays_and_back
def eye(num_rows, num_columns=None, batch_shape=None, dtype=ivy.float32, name=None):
return ivy.eye(num_rows, num_columns, batch_shape=batch_shape, dtype=dtype)
@to_ivy_arrays_and_back
def fill(dims, value, name=None):
return ivy.full(dims, value)
@to_ivy_arrays_and_back
def foldl(
fn,
elems,
initializer=None,
parallel_iterations=10,
swap_memory=False,
name=None,
):
ivy.utils.assertions.check_isinstance(
elems, (list, ivy.Array), "elems must be an iterable object"
)
ivy.utils.assertions.check_true(
callable(fn), f"{fn.__name__} must be a callable function"
)
if len(ivy.shape(elems)) == 0 or ivy.get_num_dims(elems) == 0:
raise ivy.utils.exceptions.IvyValueError(
"elems must be a non-empty iterable object with at least one dimension"
)
if initializer is not None:
result = functools.reduce(fn, elems, initializer)
elif initializer is None and ivy.shape(elems)[0] > 0:
result = functools.reduce(fn, elems[1:], elems[0])
else:
result = elems
if all(ivy.get_num_dims(e) == 0 for e in elems):
result = ivy.to_scalar(result)
return result
@with_unsupported_dtypes({"2.6.0 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def foldr(
fn,
elems,
initializer=None,
parallel_iterations=10,
back_prop=True,
swap_memory=False,
name=None,
):
ivy.utils.assertions.check_isinstance(
elems, (list, ivy.Array), "elems must be an iterable object"
)
ivy.utils.assertions.check_true(
callable(fn), f"{fn.__name__} must be a callable function"
)
if len(ivy.shape(elems)) == 0 or ivy.get_num_dims(elems) == 0:
raise ivy.utils.exceptions.IvyValueError(
"elems must be a non-empty iterable object with at least one dimension"
)
elems = ivy.flip(elems)
if initializer is not None:
result = functools.reduce(fn, elems, initializer)
elif initializer is None and ivy.shape(elems)[0] > 0:
result = functools.reduce(fn, elems[1:], elems[0])
else:
result = elems
if all(ivy.get_num_dims(e) == 0 for e in elems):
result = ivy.to_scalar(result)
result = ivy.flip(result)
return result
@to_ivy_arrays_and_back
def gather(params, indices, validate_indices=None, axis=None, batch_dims=0, name=None):
if axis is None:
axis = batch_dims
else:
axis = axis % len(params.shape)
axis = max(axis, batch_dims)
return ivy.gather(params, indices, axis=axis, batch_dims=batch_dims)
@to_ivy_arrays_and_back
def gather_nd(params, indices, batch_dims=0, name=None):
return ivy.gather_nd(params, indices, batch_dims=batch_dims)
@to_ivy_arrays_and_back
def identity(input, name=None):
return ivy.copy_array(input)
@to_ivy_arrays_and_back
def identity_n(input, name=None):
return [ivy.copy_array(x) for x in input]
@to_ivy_arrays_and_back
def is_tensor(x, name=None):
return ivy.is_array(x)
@to_ivy_arrays_and_back
def linspace(start, stop, num, name=None, axis=0):
return ivy.linspace(start, stop, num, axis=axis)
@to_ivy_arrays_and_back
def meshgrid(*args, **kwargs):
sparse = False
indexing = "xy"
if "indexing" in kwargs:
indexing = kwargs["indexing"]
return ivy.meshgrid(*args, sparse=sparse, indexing=indexing)
@to_ivy_arrays_and_back
def no_op(name=None):
return
@with_supported_dtypes({"2.15.0 and below": ("float32", "float64")}, "tensorflow")
@to_ivy_arrays_and_back
def norm(tensor, ord="euclidean", axis=None, keepdims=None, name=None):
return tf_frontend.linalg.norm(
tensor, ord=ord, axis=axis, keepdims=keepdims, name=name
)
@to_ivy_arrays_and_back
def one_hot(
indices: ivy.Array,
depth: int,
on_value=None,
off_value=None,
axis=None,
dtype=None,
device=None,
out=None,
):
return ivy.one_hot(indices, depth)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@handle_tf_dtype
@to_ivy_arrays_and_back
def ones(shape, dtype=ivy.float32, name=None):
return ivy.ones(shape, dtype=dtype)
@handle_tf_dtype
@to_ivy_arrays_and_back
def ones_like(input, dtype=None, name=None):
return ivy.ones_like(input, dtype=dtype)
@to_ivy_arrays_and_back
def pad(tensor, paddings, mode="CONSTANT", constant_values=0, name=None):
paddings = paddings.to_list() if ivy.is_array(paddings) else paddings
return ivy.pad(tensor, paddings, mode=mode.lower(), constant_values=constant_values)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@handle_tf_dtype
@to_ivy_arrays_and_back
def range(start, limit=None, delta=1, dtype=None, name=None):
return ivy.arange(start, limit, delta, dtype=dtype)
@to_ivy_arrays_and_back
def rank(input, **kwargs):
return ivy.astype(ivy.array(input.ndim), ivy.int32)
@with_unsupported_dtypes({"2.15.0 and below": ("unsigned", "integer")}, "tensorflow")
@to_ivy_arrays_and_back
def realdiv(x, y, name=None):
x, y = check_tensorflow_casting(x, y)
return ivy.divide(x, y)
@to_ivy_arrays_and_back
def repeat(
input,
repeats,
axis=None,
name=None,
):
return ivy.repeat(input, repeats, axis=axis)
@to_ivy_arrays_and_back
def reshape(tensor, shape, name=None):
shape = shape.to_list() if ivy.is_array(shape) else shape
return ivy.reshape(tensor, shape=shape)
@to_ivy_arrays_and_back
def reverse(tensor, axis, name=None):
return ivy.flip(tensor, axis=axis)
@to_ivy_arrays_and_back
def roll(input, shift, axis, name=None):
return ivy.roll(input, shift, axis=axis)
@to_ivy_arrays_and_back
def scan(
fn,
elems,
initializer=None,
parallel_iterations=10,
back_prop=True,
swap_memory=False,
infer_shape=True,
reverse=False,
name=None,
):
elems = ivy.asarray(elems)
return ivy.associative_scan(elems, fn, reverse=reverse)
@to_ivy_arrays_and_back
def searchsorted(sorted_sequence, values, side="left", out_type="int32"):
out_type = to_ivy_dtype(out_type)
if out_type not in ["int32", "int64"]:
out_type = "int64"
return ivy.searchsorted(sorted_sequence, values, side=side, ret_dtype=out_type)
@with_supported_dtypes(
{"2.15.0 and below": ("int8", "int16", "int32", "int64")}, "tensorflow"
)
@to_ivy_arrays_and_back
def sequence_mask(lengths, maxlen=None, dtype=ivy.bool, name=None):
if maxlen is None:
maxlen = ivy.maximum(
ivy.max(lengths), ivy.max(ivy.arange(ivy.get_num_dims(lengths)))
)
maxlen = ivy.maximum(0, maxlen)
else:
maxlen = ivy.array(maxlen)
if ivy.get_num_dims(maxlen) is not None and ivy.get_num_dims(maxlen) != 0:
raise ValueError(
"Argument `maxlen` must be scalar for sequence_mask, "
f"received `maxlen` = {maxlen} "
f"with shape '{maxlen.get_shape()}' instead"
)
row_vector = ivy.arange(0, int(maxlen), 1)
matrix = ivy.expand_dims(lengths, axis=-1)
result = row_vector < matrix
if dtype is None:
return result
else:
return ivy.astype(result, dtype)
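# A minimal sketch of the broadcasting trick used by sequence_mask above
# (illustrative; assumes a numpy-like backend): comparing a [maxlen] row
# vector against a [..., 1] column of lengths yields the whole mask at once.
#
# >>> lengths = ivy.array([1, 3, 2])
# >>> row_vector = ivy.arange(0, 3, 1)            # [0, 1, 2]
# >>> matrix = ivy.expand_dims(lengths, axis=-1)  # shape (3, 1)
# >>> row_vector < matrix
# ivy.array([[ True, False, False],
#            [ True,  True,  True],
#            [ True,  True, False]])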
@to_ivy_arrays_and_back
def shape(input, out_type=ivy.int32, name=None):
out_type = to_ivy_dtype(out_type)
if out_type in ["int32", "int64"]:
return ivy.array(ivy.shape(input), dtype=out_type)
else:
return ivy.array(ivy.shape(input), dtype="int64")
@to_ivy_arrays_and_back
def shape_n(input, out_type=ivy.int32, name=None):
out_type = to_ivy_dtype(out_type)
if out_type in ["int32", "int64"]:
return [ivy.array(ivy.shape(i), dtype=out_type) for i in input]
else:
return [ivy.array(ivy.shape(i), dtype="int64") for i in input]
@to_ivy_arrays_and_back
def size(input, out_type=ivy.int32, name=None):
out_type = to_ivy_dtype(out_type)
shape = ivy.shape(input, as_array=True)
return ivy.astype(ivy.prod(shape), out_type, copy=False)
@to_ivy_arrays_and_back
def slice(input_, begin, size, name=None):
return strided_slice(input_, begin, begin + size)
@to_ivy_arrays_and_back
def sort(values, axis=-1, direction="ASCENDING", name=None):
descending = True
if direction == "ASCENDING":
descending = False
else:
ivy.utils.assertions.check_equal(
direction,
"DESCENDING",
message="Argument `direction` should be one of 'ASCENDING' or 'DESCENDING'",
as_array=False,
)
return ivy.sort(values, axis=axis, descending=descending)
@with_unsupported_dtypes(
{"2.15.0 and below": ("uint8", "uint16", "uint32", "uint64", "int16")}, "tensorflow"
)
@to_ivy_arrays_and_back
def split(value, num_or_size_splits, axis=0, num=None, name=None):
return ivy.split(
value, num_or_size_splits=num_or_size_splits, axis=axis, with_remainder=True
)
@to_ivy_arrays_and_back
def squeeze(input, axis=None, name=None):
return ivy.squeeze(input, axis=axis)
@to_ivy_arrays_and_back
def stack(values, axis=0, name="stack"):
return ivy.stack(values, axis=axis)
@to_ivy_arrays_and_back
def stop_gradient(input, name=None):
return ivy.stop_gradient(input)
# ToDo: find a workaround for negative indexing, which torch does not support
@to_ivy_arrays_and_back
def strided_slice(
input_,
begin,
end,
strides=None,
begin_mask=0,
end_mask=0,
ellipsis_mask=0,
new_axis_mask=0,
shrink_axis_mask=0,
var=None,
name=None,
):
input_shape = list(input_.shape)
input_rank = len(input_shape)
begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask = list(
map(
_num_to_bit_list,
[begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask],
[input_rank] * 5,
)
)
begin, end, strides = map(
lambda x: ivy.array(x) if isinstance(x, int) else x, [begin, end, strides]
)
num_defined = len(begin)
strides = ivy.repeat(ivy.array(1), num_defined) if strides is None else strides
    ivy.utils.assertions.check_true(
num_defined == len(end) == len(strides),
message="`begin`, `end`, and `strides` are expected to have the same length",
)
begin, end, strides = map(
lambda x: [ivy.to_scalar(i) for i in x] if ivy.is_ivy_array(x) else x,
[begin, end, strides],
)
for i, v in enumerate(shrink_axis_mask):
if v == 1:
begin_mask[i] = 0
ellipsis_indices = [i for i, v in enumerate(ellipsis_mask) if v]
if len(ellipsis_indices) > 1:
raise ValueError("Multiple ellipses are not allowed.")
elif len(ellipsis_indices) == 1:
ellipsis_index = ellipsis_indices[0]
num_missing = input_rank - len(begin)
if num_missing == 0:
begin_mask[ellipsis_index] = 1
end_mask[ellipsis_index] = 1
shrink_axis_mask[ellipsis_index] = 0
new_axis_mask[ellipsis_index] = 0
else:
for i in py_range(ellipsis_index, ellipsis_index + num_missing + 1, 1):
if i < input_rank:
shrink_axis_mask[i] = 0
new_axis_mask[i] = 0
else:
break
if ellipsis_index >= len(begin):
begin = begin + [None] * num_missing
end = end + [None] * num_missing
strides = strides + [1] * num_missing
else:
begin = (
begin[:ellipsis_index]
+ [None] * (num_missing + 1)
+ begin[ellipsis_index + 1 :]
)
end = (
end[:ellipsis_index]
+ [None] * (num_missing + 1)
+ end[ellipsis_index + 1 :]
)
strides = (
strides[:ellipsis_index]
+ [1] * (num_missing + 1)
+ strides[ellipsis_index + 1 :]
)
full_slice = ()
for i, _ in enumerate(begin):
if new_axis_mask[i]:
full_slice += (ivy.newaxis,)
else:
b = None if begin_mask[i] else begin[i]
e = None if end_mask[i] else end[i]
s = strides[i]
if b is None and e is None:
s = 1 if ellipsis_mask[i] else s
elif shrink_axis_mask[i]:
if b is not None:
e = b + 1 if s > 0 else b - 1
else:
e = 1 if s > 0 else input_shape[i] - 2
full_slice += (py_slice(b, e, s),)
if all(i is None for i in full_slice):
full_slice += (...,)
ret = input_[full_slice]
shrink_indices = [
i
for i, v in enumerate(shrink_axis_mask)
if v and i < len(ret.shape) and ret.shape[i] == 1
]
ret = ivy.squeeze(ret, axis=shrink_indices)
return ret
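# An illustrative sanity check for the mask handling above (assuming
# _num_to_bit_list produces a little-endian bit list, so end_mask=1 flags
# axis 0; axes beyond the defined begin/end are taken in full):
#
# >>> x = ivy.reshape(ivy.arange(24), (2, 3, 4))
# >>> strided_slice(x, begin=[0, 1], end=[2, 3], strides=[1, 1], end_mask=1)
# # equivalent to x[0:, 1:3, :], i.e. shape (2, 2, 4)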
@to_ivy_arrays_and_back
def tensor_scatter_nd_add(tensor, indices, updates, name=None):
zero_tensor = ivy.zeros_like(tensor)
scatter_tensor = ivy.scatter_nd(indices, updates, zero_tensor.shape)
return ivy.add(tensor, scatter_tensor)
@with_unsupported_dtypes({"2.15.0 and below": ("uint16",)}, "tensorflow")
@to_ivy_arrays_and_back
def tile(input, multiples, name=None):
return ivy.tile(input, multiples)
@to_ivy_arrays_and_back
def transpose(a, perm=None, conjugate=False, name="transpose"):
# handle conjugate when ivy supports complex numbers
if perm is not None:
return ivy.permute_dims(a, axes=perm)
n = a.ndim
perm = ivy.arange(n - 1, -1, -1)
return ivy.permute_dims(a, axes=perm)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@to_ivy_arrays_and_back
def truncatediv(x, y, name=None):
return x.trunc_divide(y)
@with_unsupported_dtypes(
{"2.15.0 and below": ("int16", "int8", "uint8", " uint16")}, "tensorflow"
)
@to_ivy_arrays_and_back
def truncatemod(x, y):
x = ivy.broadcast_to(x, ivy.shape(y))
y = ivy.broadcast_to(y, ivy.shape(x))
    return x - ivy.trunc(x / y) * y
@to_ivy_arrays_and_back
def unique(x, out_idx=ivy.int32, name=None):
ret = ivy.unique_all(x, by_value=False)
y = ret[0]
idx = ivy.astype(ret[2], out_idx)
return y, idx
@to_ivy_arrays_and_back
def unique_with_counts(x, out_idx="int32", name=None):
x = x.to_list() if ivy.is_array(x) else x
ivy.utils.assertions.check_equal(
ivy.array(x).ndim,
1,
message="unique_with_counts expects a 1D vector.",
)
ivy.utils.assertions.check_elem_in_list(
out_idx,
["int32", "int64"],
message=(
f"Value for attr 'out_idx' of {out_idx} is not in the list of allowed"
" values: [int32, int64]"
),
)
values = []
indices = []
counts = []
for element in x:
if element not in values:
values.append(element)
indices.append(len(values) - 1)
counts.append(1)
else:
index = values.index(element)
counts[index] += 1
indices.append(index)
return (
ivy.array(values),
ivy.array(indices, dtype=out_idx),
ivy.array(counts, dtype=out_idx),
)
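# A minimal usage sketch (illustrative; note the pure-Python loop above orders
# values by first appearance rather than by sorted value):
#
# >>> y, idx, count = unique_with_counts(ivy.array([1, 1, 2, 4, 4, 4, 7, 8, 8]))
# >>> y
# ivy.array([1, 2, 4, 7, 8])
# >>> idx
# ivy.array([0, 0, 1, 2, 2, 2, 3, 4, 4])
# >>> count
# ivy.array([2, 1, 3, 1, 2])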
@to_ivy_arrays_and_back
def unravel_index(indices, dims, out=None, name=None):
return ivy.unravel_index(indices, dims, out=out)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@to_ivy_arrays_and_back
def unstack(value: ivy.Array, axis=0, num=None, name=None):
return ivy.unstack(value, axis=axis)
@to_ivy_arrays_and_back
def where(condition: ivy.Array, x=None, y=None, name=None):
if x is None and y is None:
return ivy.argwhere(condition)
else:
return ivy.where(condition, x, y)
@to_ivy_arrays_and_back
def while_loop(
cond,
body,
loop_vars,
shape_invariants=None,
parallel_iterations=10,
back_prop=True,
swap_memory=False,
maximum_iterations=None,
name=None,
):
return ivy.while_loop(test_fn=cond, body_fn=body, vars=loop_vars)
@handle_tf_dtype
@to_ivy_arrays_and_back
def zeros(shape, dtype=ivy.float32, name=None):
return ivy.zeros(shape=shape, dtype=dtype)
@with_unsupported_dtypes({"2.15.0 and below": ("float16", "bfloat16")}, "tensorflow")
@to_ivy_arrays_and_back
def zeros_initializer(shape, dtype=None, name=None):
# todo internal: fix behaviour
if dtype is None:
dtype = ivy.default_dtype()
return ivy.zeros(shape, dtype=dtype)
@handle_tf_dtype
@to_ivy_arrays_and_back
def zeros_like(input, dtype=None, name=None):
return ivy.zeros_like(input, dtype=dtype)
| ivy/ivy/functional/frontends/tensorflow/general_functions.py/0 | {
"file_path": "ivy/ivy/functional/frontends/tensorflow/general_functions.py",
"repo_id": "ivy",
"token_count": 10165
} | 42 |
import ivy
# TODO: align behavior with tensorflow:
# - make the elements of the RaggedTensor object EagerTensors
# - ensure that the values and row_splits are EagerTensors too
# - add more initializer methods
class RaggedTensor:
def __init__(self, values, row_partition, internal=False, data=None):
if not internal:
raise ivy.utils.exceptions.IvyException(
"RaggedTensor constructor is private; please use one of the "
"factory methods instead "
"(e.g., RaggedTensor.from_row_lengths())"
)
self._values = values
self.data = data
self._row_partition = row_partition
@classmethod
def from_row_splits(cls, values, row_splits, name=None, validate=True):
# TODO : modify this, if necessary, to accept raggedTensor inputs too
if values.shape[0] != row_splits[-1] or row_splits[0] != 0:
if values.shape[0] != row_splits[-1]:
raise ivy.utils.exceptions.IvyException(
"first dimension of shape of values should be equal to the"
" last dimension of row_splits"
)
else:
raise ivy.utils.exceptions.IvyException(
"first value of row_splits should be equal to zero."
)
data = [
values[row_splits[i] : row_splits[i + 1], :]
for i in range(len(row_splits) - 1)
]
return cls(values=values, row_partition=row_splits, internal=True, data=data)
@classmethod
def from_row_lengths(
cls,
values,
row_lengths,
name=None,
):
# TODO : modify this, if necessary, to accept raggedTensor inputs too
if sum(row_lengths) != values.shape[0]:
raise ivy.utils.exceptions.IvyException(
"first dimension of values should be equal to sum(row_lengths) "
)
data = []
z = 0
for length in row_lengths:
temp = []
for i in range(length):
temp.append(values[z, :])
z += 1
data.append(ivy.asarray(temp))
# data =[[values[0+i,:] for i in range(length)]
# for length in row_lengths]
return cls(values=values, row_partition=row_lengths, internal=True, data=data)
@classmethod
def from_value_rowids(
cls,
values,
value_rowids,
nrows=None,
name=None,
):
if not nrows:
nrows = value_rowids[-1] + 1
data = []
for row in range(nrows):
temp = []
for i in range(len(values)):
if value_rowids[i] == row:
temp.append(values[i, :])
data.append(ivy.asarray(temp))
# data= [[values[i,:] for i in range(len(values)) if value_rowids[i] == row]
# for row in range(nrows)]
return cls(values=values, row_partition=value_rowids, internal=True, data=data)
@classmethod
def from_row_starts(
cls,
values,
row_starts,
name=None,
):
        # TODO: since row_starts will be a tensor, append using concat after
        # ensuring row_starts is a tensor
row_starts.append(len(values))
return cls.from_row_splits(values, row_starts)
def to_list(self):
vals = []
for i in self:
if isinstance(i, RaggedTensor):
vals.append(i.to_list())
else:
vals.append(ivy.to_list(i))
return vals
@property
def values(self):
return self._values
@property
def flat_values(self):
values = self.values
while isinstance(values, RaggedTensor):
values = values.values
return values
@property
def row_splits(self):
return self._row_partition
@property
def nested_row_splits(self):
rt_nested_splits = [self.row_splits]
rt_values = self.values
while isinstance(rt_values, RaggedTensor):
rt_nested_splits.append(rt_values.row_splits)
rt_values = rt_values.values
return tuple(rt_nested_splits)
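# A minimal usage sketch for the factory constructors above (illustrative;
# values assume a numpy-like backend is active):
#
# >>> values = ivy.array([[3.0], [1.0], [4.0], [1.0], [5.0]])
# >>> rt = RaggedTensor.from_row_splits(values, row_splits=[0, 2, 2, 5])
# >>> rt.values.shape   # the flat value store is kept as-is
# (5, 1)
# >>> rt.row_splits     # rows are values[0:2], values[2:2], values[2:5]
# [0, 2, 2, 5]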
| ivy/ivy/functional/frontends/tensorflow/ragged/ragged.py/0 | {
"file_path": "ivy/ivy/functional/frontends/tensorflow/ragged/ragged.py",
"repo_id": "ivy",
"token_count": 2094
} | 43 |
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.frontends.torch.func_wrapper import (
to_ivy_arrays_and_back,
numpy_to_torch_style_args,
to_ivy_shape,
)
import ivy.functional.frontends.torch as torch_frontend
@to_ivy_arrays_and_back
def adjoint(input):
return ivy.adjoint(input)
@to_ivy_arrays_and_back
def argwhere(input):
return ivy.argwhere(input)
@numpy_to_torch_style_args
@to_ivy_arrays_and_back
def cat(tensors, dim=0, *, out=None):
return ivy.concat(tensors, axis=dim, out=out)
@to_ivy_arrays_and_back
def chunk(input, chunks, dim=0):
if ivy.shape(input) == ():
return [input]
else:
dim_size = ivy.shape(input)[dim]
chunk_size = dim_size // chunks
if chunk_size == 0:
return ivy.split(input, num_or_size_splits=dim_size, axis=dim)
else:
remainder = dim_size % chunks
if remainder == 0:
return ivy.split(input, num_or_size_splits=chunks, axis=dim)
else:
return ivy.split(
input,
num_or_size_splits=tuple(
[chunk_size + remainder] + [chunk_size] * (chunks - 1)
),
axis=dim,
)
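# A minimal usage sketch (illustrative): when the dimension does not divide
# evenly, the remainder above is folded into the *first* chunk, unlike
# torch.chunk, which uses ceil-sized leading chunks.
#
# >>> x = ivy.arange(7)
# >>> [c.shape[0] for c in chunk(x, 3)]
# [3, 2, 2]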
@to_ivy_arrays_and_back
def column_stack(tensors, *, out=None):
reshaped_tensors = []
for t in tensors:
dim_num = ivy.get_num_dims(t, as_array=False)
if dim_num <= 1:
reshaped_tensor = ivy.reshape(t, (-1, 1))
else:
reshaped_tensor = t
reshaped_tensors.append(reshaped_tensor)
return ivy.hstack(reshaped_tensors, out=out)
@to_ivy_arrays_and_back
def concat(tensors, dim=0, *, out=None):
return ivy.concat(tensors, axis=dim, out=out)
@to_ivy_arrays_and_back
def conj(input):
return ivy.conj(input)
# diagonal_scatter
@with_unsupported_dtypes(
{
"2.2 and below": (
"bfloat16",
"float16",
)
},
"torch",
)
@to_ivy_arrays_and_back
def diagonal_scatter(input, src, offset=0, dim1=0, dim2=1):
input = ivy.copy_array(input)
input_shape = input.shape
indices = ivy.arange(0, input.size)
diagonal_indices = ivy.diagonal(
indices.reshape(input.shape), offset=offset, axis1=dim1, axis2=dim2
)
if src.shape != diagonal_indices.shape:
raise ivy.utils.exceptions.IvyException(
"src must have shape equal to specified diagonal of input. src size ="
f" {src.shape}, diagonal size = {diagonal_indices.shape}"
)
input = input.reshape((-1,))
input[diagonal_indices.reshape((-1,))] = src.reshape((-1,))
input = input.reshape(input_shape)
return input
@to_ivy_arrays_and_back
def dsplit(input, indices_or_sections, /):
if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
indices_or_sections = (
ivy.diff(indices_or_sections, prepend=[0], append=[input.shape[2]])
.astype(ivy.int8)
.to_list()
)
return tuple(ivy.dsplit(input, indices_or_sections))
@to_ivy_arrays_and_back
def dstack(tensors, *, out=None):
return ivy.dstack(tensors, out=out)
@to_ivy_arrays_and_back
def gather(input, dim, index, *, sparse_grad=False, out=None):
if sparse_grad:
raise ivy.utils.exceptions.IvyException(
"Gather does not yet support the sparse grad functionality"
)
dim = dim % len(input.shape)
all_indices = ivy.argwhere(ivy.full(index.shape, True))
gather_locations = ivy.reshape(index, [ivy.prod(ivy.array(index.shape))])
gather_indices = []
for axis in range(len(index.shape)):
if axis == dim:
gather_indices.append(ivy.array(gather_locations, dtype=index.dtype))
else:
gather_indices.append(ivy.array(all_indices[:, axis], dtype=index.dtype))
gather_indices = ivy.stack(gather_indices, axis=-1)
gathered = ivy.gather_nd(input, gather_indices)
reshaped = ivy.reshape(gathered, index.shape)
return reshaped
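# A minimal usage sketch (illustrative; mirrors torch.gather semantics, where
# `index` selects along `dim` and every other axis is matched element-wise):
#
# >>> t = ivy.array([[1, 2], [3, 4]])
# >>> gather(t, 1, ivy.array([[0, 0], [1, 0]]))
# ivy.array([[1, 1],
#            [4, 3]])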
@to_ivy_arrays_and_back
def hsplit(input, indices_or_sections=None, /):
if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
if input.ndim == 1:
indices_or_sections = (
ivy.diff(indices_or_sections, prepend=[0], append=[input.shape[0]])
.astype(ivy.int8)
.to_list()
)
else:
indices_or_sections = (
ivy.diff(indices_or_sections, prepend=[0], append=[input.shape[1]])
.astype(ivy.int8)
.to_list()
)
return tuple(ivy.hsplit(input, indices_or_sections))
@to_ivy_arrays_and_back
def hstack(tensors, *, out=None):
return ivy.hstack(tensors, out=out)
@to_ivy_arrays_and_back
def index_add(input, dim, index, source, *, alpha=1, out=None):
input = ivy.swapaxes(input, dim, 0)
source = ivy.swapaxes(source, dim, 0)
_to_adds = []
index = sorted(zip(ivy.to_list(index), range(len(index))), key=(lambda x: x[0]))
while index:
_curr_idx = index[0][0]
while len(_to_adds) < _curr_idx:
_to_adds.append(ivy.zeros_like(source[0]))
_to_add_cum = ivy.get_item(source, index[0][1])
while (len(index) > 1) and (index[0][0] == index[1][0]):
_to_add_cum = _to_add_cum + ivy.get_item(source, index.pop(1)[1])
index.pop(0)
_to_adds.append(_to_add_cum)
while len(_to_adds) < input.shape[0]:
_to_adds.append(ivy.zeros_like(source[0]))
_to_adds = ivy.stack(_to_adds)
if len(input.shape) < 2:
# Added this line due to the paddle backend treating scalars as 1-d arrays
_to_adds = ivy.flatten(_to_adds)
ret = ivy.add(input, _to_adds, alpha=alpha)
ret = ivy.swapaxes(ret, 0, dim, out=out)
return ret
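# A minimal usage sketch (illustrative; mirrors torch.index_add, accumulating
# rows of `source` into the rows of `input` selected by `index`):
#
# >>> x = ivy.zeros((3, 2))
# >>> src = ivy.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
# >>> index_add(x, 0, ivy.array([0, 2, 0]), src)
# ivy.array([[4., 4.],
#            [0., 0.],
#            [2., 2.]])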
@to_ivy_arrays_and_back
def index_copy(input, dim, index, source, *, out=None):
input = ivy.swapaxes(input, dim, 0)
source = ivy.swapaxes(source, dim, 0)
index = sorted(zip(ivy.to_list(index), range(len(index))), key=(lambda x: x[0]))
res = []
while index:
_curr_idx = index[0][0]
for i in range(len(res), _curr_idx):
res.append(ivy.get_item(input, i))
while (len(index) > 1) and (index[0][0] == index[1][0]):
index.pop(0)
res.append(ivy.get_item(source, index[0][1]))
index.pop(0)
for i in range(len(res), input.shape[0]):
res.append(ivy.get_item(input, i))
res = ivy.stack(res)
if len(input.shape) < 2:
res = ivy.flatten(res)
return ivy.swapaxes(res, 0, dim, out=out)
@with_unsupported_dtypes(
{
"2.2 and below": (
"uint16",
"uint32",
"uint64",
"bfloat16",
"complex128",
"complex64",
)
},
"torch",
)
@to_ivy_arrays_and_back
def index_reduce(input, dim, index, source, reduce, *, include_self=True, out=None):
result = ivy.copy_array(input)
counts = (
ivy.ones_like(result, dtype=result.dtype)
if include_self
else ivy.zeros_like(result, dtype=result.dtype)
)
index = index.astype(ivy.int64)
def init_val(reduce):
if reduce == "prod":
return 1
elif reduce == "amax":
return -ivy.inf
elif reduce == "amin":
return ivy.inf
else:
return 0
if not include_self:
result[index, ...] = init_val(reduce)
numel = index.size
index_contig = ivy.copy_array(index)
def update_counts(reduce, counts, dim, input_index):
if reduce == "mean":
counts_slice = [slice(None)] * counts.ndim
counts_slice[dim] = input_index
counts[tuple(counts_slice)] += 1
return counts
def update_result(result, reduce, input_data, source_data):
if reduce == "prod":
return input_data * source_data
elif reduce == "amin":
return ivy.minimum(input_data, source_data)
elif reduce == "amax":
return ivy.maximum(input_data, source_data)
else:
return input_data + source_data
if result.ndim > 1:
for i in range(numel):
input_index = index_contig[i]
if not (0 <= input_index < result.shape[dim]):
raise IndexError("Index out of range in self")
input_data = ivy.gather(result, [input_index], axis=dim)
source_data = ivy.gather(source, [i], axis=dim)
result_slice = [slice(None)] * result.ndim
result_slice[dim] = input_index
update_data = update_result(result, reduce, input_data, source_data)
slide_shape = result[tuple(result_slice)].shape
result[tuple(result_slice)] = ivy.reshape(update_data, slide_shape)
counts = update_counts(reduce, counts, dim, input_index)
elif result.ndim == 1:
for i in range(numel):
input_index = index_contig[i]
if not (0 <= input_index < result.size):
raise IndexError("Index out of range in self")
input_data = ivy.flatten(result)[input_index]
source_data = ivy.flatten(source)[i]
result[input_index] = update_result(result, reduce, input_data, source_data)
counts[input_index] += 1
if reduce == "mean":
if ivy.any(counts == ivy.array(0)):
counts[counts == ivy.array(0)] = ivy.array(1)
result /= counts
if not input.is_float_dtype():
result = ivy.floor(result)
result = result.astype(input.dtype)
return result
@to_ivy_arrays_and_back
def index_select(input, dim, index, *, out=None):
return ivy.gather(input, index, axis=dim, out=out)
@to_ivy_arrays_and_back
def masked_select(input, mask, out=None):
return ivy.flatten(input[mask], out=out)
@to_ivy_arrays_and_back
def moveaxis(input, source, destination):
return ivy.moveaxis(input, source, destination)
@to_ivy_arrays_and_back
def movedim(input, source, destination):
return ivy.moveaxis(input, source, destination)
@to_ivy_arrays_and_back
def narrow(input, dim, start, length):
num_dims = ivy.get_num_dims(input)
slices = [slice(None)] * num_dims
slices[dim] = slice(start, start + length)
return input[tuple(slices)]
@to_ivy_arrays_and_back
def nonzero(input, *, out=None, as_tuple=False):
if as_tuple:
return ivy.nonzero(input, as_tuple=as_tuple)
return ivy.argwhere(input != 0, out=out)
@to_ivy_arrays_and_back
def permute(input, dims):
return ivy.permute_dims(input, axes=dims, copy=False)
@to_ivy_shape
@to_ivy_arrays_and_back
def reshape(input, shape):
return ivy.reshape(input, shape)
@to_ivy_arrays_and_back
def row_stack(tensors, *, out=None):
return ivy.vstack(tensors, out=out)
@to_ivy_arrays_and_back
def select(input, dim, index):
num_dims = ivy.get_num_dims(input)
slices = [slice(None)] * num_dims
slices[dim] = index
return input[tuple(slices)]
@to_ivy_arrays_and_back
def split(tensor, split_size_or_sections, dim=0):
if isinstance(split_size_or_sections, int):
split_size = split_size_or_sections
split_size_or_sections = [split_size] * (tensor.shape[dim] // split_size)
if tensor.shape[dim] % split_size:
split_size_or_sections.append(tensor.shape[dim] % split_size)
return tuple(
ivy.split(
tensor,
num_or_size_splits=split_size_or_sections,
axis=dim,
with_remainder=True,
)
)
@numpy_to_torch_style_args
@to_ivy_arrays_and_back
def squeeze(input, dim=None):
if isinstance(dim, int) and input.ndim > 0:
if input.shape[dim] > 1:
return input
return ivy.squeeze(input, axis=dim)
@to_ivy_arrays_and_back
def stack(tensors, dim=0, *, out=None):
return ivy.stack(tensors, axis=dim, out=out)
@to_ivy_arrays_and_back
def swapaxes(input, axis0, axis1):
return ivy.swapaxes(input, axis0, axis1)
@to_ivy_arrays_and_back
def swapdims(input, dim0, dim1):
return ivy.swapaxes(input, dim0, dim1)
@to_ivy_arrays_and_back
def t(input):
if input.ndim > 2:
raise ivy.utils.exceptions.IvyException(
f"t(input) expects a tensor with <= 2 dimensions, but self is {input.ndim}D"
)
if input.ndim == 2:
return ivy.swapaxes(input, 0, 1)
else:
return input
@to_ivy_arrays_and_back
def take(input, index):
input = ivy.reshape(input, (-1,))
return ivy.gather(input, index, axis=0)
@to_ivy_arrays_and_back
def take_along_dim(input, indices, dim, *, out=None):
return ivy.take_along_axis(input, indices, dim, out=out)
@to_ivy_arrays_and_back
def tensor_split(input, indices_or_sections, dim=0):
if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
indices_or_sections = (
ivy.diff(indices_or_sections, prepend=[0], append=[input.shape[dim]])
.astype(ivy.int8)
.to_list()
)
return ivy.split(
input, num_or_size_splits=indices_or_sections, axis=dim, with_remainder=True
)
@to_ivy_arrays_and_back
def tile(input, dims):
try:
tup = tuple(dims)
except TypeError:
tup = (dims,)
d = len(tup)
res = 0
if len(input.shape) > len([dims]) - 1:
res = input
if d < input.ndim:
tup = (1,) * (input.ndim - d) + tup
res = ivy.tile(input, tup)
else:
res = ivy.tile(input, repeats=dims, out=None)
return res
@to_ivy_arrays_and_back
def transpose(input, dim0, dim1):
return ivy.swapaxes(input, dim0, dim1)
@to_ivy_arrays_and_back
def unbind(input, dim=0):
shape = list(input.shape)
shape.pop(dim)
return tuple([x.reshape(tuple(shape)) for x in split(input, 1, dim=dim)])
@to_ivy_arrays_and_back
def unsqueeze(input, dim=0):
return ivy.expand_dims(input, axis=dim)
@to_ivy_arrays_and_back
def vsplit(input, indices_or_sections=None, /):
if isinstance(indices_or_sections, (list, tuple, ivy.Array)):
indices_or_sections = (
ivy.diff(indices_or_sections, prepend=[0], append=[input.shape[0]])
.astype(ivy.int8)
.to_list()
)
return tuple(ivy.vsplit(input, indices_or_sections))
@to_ivy_arrays_and_back
def vstack(tensors, *, out=None):
return ivy.vstack(tensors, out=out)
@to_ivy_arrays_and_back
def where(condition, input=None, other=None):
if not ivy.exists(input) and not ivy.exists(other):
return nonzero(condition, as_tuple=True)
input, other = torch_frontend.promote_types_of_torch_inputs(input, other)
return ivy.where(condition, input, other)
| ivy/ivy/functional/frontends/torch/indexing_slicing_joining_mutating_ops.py/0 | {
"file_path": "ivy/ivy/functional/frontends/torch/indexing_slicing_joining_mutating_ops.py",
"repo_id": "ivy",
"token_count": 7123
} | 44 |
# global
# local
import ivy
from ivy import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
# --- Helpers --- #
# --------------- #
def _handle_padding_shape(padding, n, mode):
padding = tuple(
[
(padding[i * 2], padding[i * 2 + 1])
for i in range(int(len(padding) / 2) - 1, -1, -1)
]
)
if mode == "circular":
padding = padding + ((0, 0),) * (n - len(padding))
else:
padding = ((0, 0),) * (n - len(padding)) + padding
if mode == "circular":
padding = tuple(list(padding)[::-1])
return padding
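# A minimal sketch of the reordering performed above (illustrative): torch's
# flat pad spec is given last-dim-first, while ivy.pad expects one
# (before, after) pair per axis in axis order.
#
# >>> _handle_padding_shape((1, 2, 3, 4), 4, "constant")
# ((0, 0), (0, 0), (3, 4), (1, 2))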
# --- Main --- #
# ------------ #
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def affine_grid(theta, size, align_corners=False):
if len(size) == 4:
N, C, H, W = size
base_grid = ivy.empty((N, H, W, 3))
if align_corners:
base_grid[:, :, :, 0] = ivy.linspace(-1, 1, W)
base_grid[:, :, :, 1] = ivy.expand_dims(ivy.linspace(-1, 1, H), axis=-1)
base_grid[:, :, :, 2] = ivy.full((H, W), 1)
grid = ivy.matmul(base_grid.view((N, H * W, 3)), theta.swapaxes(1, 2))
return grid.view((N, H, W, 2))
else:
base_grid[:, :, :, 0] = ivy.linspace(-1, 1, W) * (W - 1) / W
base_grid[:, :, :, 1] = ivy.expand_dims(
ivy.linspace(-1, 1, H) * (H - 1) / H, axis=-1
)
base_grid[:, :, :, 2] = ivy.full((H, W), 1)
grid = ivy.matmul(base_grid.view((N, H * W, 3)), ivy.swapaxes(theta, 1, 2))
return grid.view((N, H, W, 2))
else:
N, C, D, H, W = size
base_grid = ivy.empty((N, D, H, W, 4))
if align_corners:
base_grid[:, :, :, :, 0] = ivy.linspace(-1, 1, W)
base_grid[:, :, :, :, 1] = ivy.expand_dims(ivy.linspace(-1, 1, H), axis=-1)
base_grid[:, :, :, :, 2] = ivy.expand_dims(
ivy.expand_dims(ivy.linspace(-1, 1, D), axis=-1), axis=-1
)
base_grid[:, :, :, :, 3] = ivy.full((D, H, W), 1)
grid = ivy.matmul(base_grid.view((N, D * H * W, 4)), theta.swapaxes(1, 2))
return grid.view((N, D, H, W, 3))
else:
base_grid[:, :, :, :, 0] = ivy.linspace(-1, 1, W) * (W - 1) / W
base_grid[:, :, :, :, 1] = ivy.expand_dims(
ivy.linspace(-1, 1, H) * (H - 1) / H, axis=-1
)
base_grid[:, :, :, :, 2] = ivy.expand_dims(
ivy.expand_dims(ivy.linspace(-1, 1, D) * (D - 1) / D, axis=-1), axis=-1
)
base_grid[:, :, :, :, 3] = ivy.full((D, H, W), 1)
grid = ivy.matmul(base_grid.view((N, D * H * W, 4)), theta.swapaxes(1, 2))
return grid.view((N, D, H, W, 3))
def bicubic_interp(x, t, alpha=-0.75):
n, h, w = t.shape
coeffs = []
coeffs.append(ivy.reshape(cubic_conv2(alpha, t + 1), (n, 1, h, w)))
coeffs.append(ivy.reshape(cubic_conv1(alpha, t), (n, 1, h, w)))
coeffs.append(ivy.reshape(cubic_conv1(alpha, 1 - t), (n, 1, h, w)))
coeffs.append(ivy.reshape(cubic_conv2(alpha, 2 - t), (n, 1, h, w)))
return x[0] * coeffs[0] + x[1] * coeffs[1] + x[2] * coeffs[2] + x[3] * coeffs[3]
def cubic_conv1(A, x):
return ((A + 2) * x - (A + 3)) * x * x + 1
def cubic_conv2(A, x):
return ((A * x - 5 * A) * x + 8 * A) * x - 4 * A
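# The two kernels above are the piecewise cubic convolution weights of
# Keys (1981) with a = alpha: cubic_conv1 covers |x| <= 1 and cubic_conv2
# covers 1 < |x| <= 2. An illustrative continuity check at |x| = 1:
#
# >>> a = -0.75
# >>> cubic_conv1(a, 1.0)  # (a + 2) - (a + 3) + 1
# 0.0
# >>> cubic_conv2(a, 1.0)  # ((a - 5 * a) + 8 * a) - 4 * a
# 0.0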
@with_supported_dtypes({"2.2 and below": ("float32", "float64")}, "torch")
@to_ivy_arrays_and_back
def grid_sample(
input, grid, mode="bilinear", padding_mode="zeros", align_corners=False
):
input_clone = ivy.copy_array(input)
grid_clone = ivy.copy_array(grid)
if ivy.get_num_dims(input_clone) == 4: # sample from 2D images
n, c, h, w = input_clone.shape
n, to_h, to_w, gc = grid_clone.shape
# Un-normalize 2D grid
if align_corners: # to range[0, size - 1]
grid_clone[..., 0] = ((grid_clone[..., 0] + 1) / 2) * (w - 1)
grid_clone[..., 1] = ((grid_clone[..., 1] + 1) / 2) * (h - 1)
elif not align_corners: # to range[0.5, size - 0.5]
grid_clone[..., 0] = ((grid_clone[..., 0] + 1) * w - 1) / 2
grid_clone[..., 1] = ((grid_clone[..., 1] + 1) * h - 1) / 2
batch_coor = ivy.reshape(ivy.arange(n), (-1, 1))
batch_coor = ivy.repeat(batch_coor, to_h * to_w, axis=1)
batch_coor = ivy.reshape(batch_coor, (n, to_h, to_w))
padding = [(0, 0) for _ in range(2)] + [(4, 4) for _ in range(2)]
input_clone = ivy.pad(input_clone, padding, mode="constant", constant_values=0)
if mode == "bicubic":
grid_floor = ivy.floor(grid_clone)
distance = grid_clone - grid_floor
tx, ty = distance[..., 0], distance[..., 1]
grid_floor -= 1
grid_floor = [
grid_sample_padding(
grid_floor + i, padding_mode, align_corners, borders=[w, h]
)
for i in range(4)
]
w_cubic = [
ivy.astype(grid_floor[i][..., 0] + 4, ivy.int64) for i in range(4)
]
h_cubic = [
ivy.astype(grid_floor[i][..., 1] + 4, ivy.int64) for i in range(4)
]
coeffs = [
bicubic_interp(
[
ivy.permute_dims(
input_clone[batch_coor, :, h_cubic[i], w_cubic[0]],
(0, 3, 1, 2),
),
ivy.permute_dims(
input_clone[batch_coor, :, h_cubic[i], w_cubic[1]],
(0, 3, 1, 2),
),
ivy.permute_dims(
input_clone[batch_coor, :, h_cubic[i], w_cubic[2]],
(0, 3, 1, 2),
),
ivy.permute_dims(
input_clone[batch_coor, :, h_cubic[i], w_cubic[3]],
(0, 3, 1, 2),
),
],
tx,
)
for i in range(4)
]
return bicubic_interp(coeffs, ty)
else:
grid_clone = grid_sample_padding(
grid_clone, padding_mode, align_corners, borders=[w, h]
)
if mode == "bilinear":
grid_clone += 4
w_coor = ivy.reshape(grid_clone[..., 0], (n, to_h, to_w))
h_coor = ivy.reshape(grid_clone[..., 1], (n, to_h, to_w))
w0 = ivy.astype(ivy.floor(w_coor), ivy.int64)
h0 = ivy.astype(ivy.floor(h_coor), ivy.int64)
w1 = w0 + 1
h1 = h0 + 1
v00 = ivy.permute_dims(input_clone[batch_coor, :, h0, w0], (0, 3, 1, 2))
v01 = ivy.permute_dims(input_clone[batch_coor, :, h0, w1], (0, 3, 1, 2))
v10 = ivy.permute_dims(input_clone[batch_coor, :, h1, w0], (0, 3, 1, 2))
v11 = ivy.permute_dims(input_clone[batch_coor, :, h1, w1], (0, 3, 1, 2))
alpha = ivy.reshape(w_coor - w0, (n, 1, to_h, to_w))
beta = ivy.reshape(h_coor - h0, (n, 1, to_h, to_w))
alpha = ivy.astype(alpha, ivy.float32)
beta = ivy.astype(beta, ivy.float32)
v0 = v00 * (1 - alpha) + v01 * alpha
v1 = v10 * (1 - alpha) + v11 * alpha
return v0 * (1 - beta) + v1 * beta
elif mode == "nearest":
w_coor = ivy.reshape(grid_clone[..., 0], (n, to_h, to_w))
h_coor = ivy.reshape(grid_clone[..., 1], (n, to_h, to_w))
w_coor = ivy.astype(ivy.round(w_coor), ivy.int64) + 4
h_coor = ivy.astype(ivy.round(h_coor), ivy.int64) + 4
return ivy.permute_dims(
input_clone[batch_coor, :, h_coor, w_coor], (0, 3, 1, 2)
)
else:
raise ivy.exceptions.IvyError(f"Not supported mode {mode}")
elif ivy.get_num_dims(input_clone) == 5: # sample from 3D images
n, c, d, h, w = input_clone.shape
n, to_d, to_h, to_w, gc = grid_clone.shape
# Un-normalize 3D grid
if align_corners: # to range[0, size - 1]
grid_clone[..., 0] = ((grid_clone[..., 0] + 1) / 2) * (w - 1)
grid_clone[..., 1] = ((grid_clone[..., 1] + 1) / 2) * (h - 1)
grid_clone[..., 2] = ((grid_clone[..., 2] + 1) / 2) * (d - 1)
elif not align_corners: # to range[0.5, size - 0.5]
grid_clone[..., 0] = ((grid_clone[..., 0] + 1) * w - 1) / 2
grid_clone[..., 1] = ((grid_clone[..., 1] + 1) * h - 1) / 2
grid_clone[..., 2] = ((grid_clone[..., 2] + 1) * d - 1) / 2
batch_coor = ivy.reshape(ivy.arange(n), (-1, 1))
batch_coor = ivy.repeat(batch_coor, to_d * to_h * to_w, axis=1)
batch_coor = ivy.reshape(batch_coor, (n, to_d, to_h, to_w))
padding = [(0, 0) for _ in range(2)] + [(3, 3) for _ in range(3)]
input_clone = ivy.pad(input_clone, padding, mode="constant", constant_values=0)
grid_clone = grid_sample_padding(
grid_clone, padding_mode, align_corners, borders=[w, h, d]
)
if mode == "bilinear":
grid_clone += 3
w_coor = ivy.reshape(grid_clone[..., 0], (n, to_d, to_h, to_w))
h_coor = ivy.reshape(grid_clone[..., 1], (n, to_d, to_h, to_w))
d_coor = ivy.reshape(grid_clone[..., 2], (n, to_d, to_h, to_w))
w0 = ivy.astype(ivy.floor(w_coor), ivy.int64)
h0 = ivy.astype(ivy.floor(h_coor), ivy.int64)
d0 = ivy.astype(ivy.floor(d_coor), ivy.int64)
w1 = w0 + 1
h1 = h0 + 1
d1 = d0 + 1
v000 = ivy.permute_dims(
input_clone[batch_coor, :, d0, h0, w0], (0, 4, 1, 2, 3)
) # tnw
v001 = ivy.permute_dims(
input_clone[batch_coor, :, d0, h0, w1], (0, 4, 1, 2, 3)
) # tne
v010 = ivy.permute_dims(
input_clone[batch_coor, :, d0, h1, w0], (0, 4, 1, 2, 3)
) # tsw
v011 = ivy.permute_dims(
input_clone[batch_coor, :, d0, h1, w1], (0, 4, 1, 2, 3)
) # tse
v100 = ivy.permute_dims(
input_clone[batch_coor, :, d1, h0, w0], (0, 4, 1, 2, 3)
) # bnw
v101 = ivy.permute_dims(
input_clone[batch_coor, :, d1, h0, w1], (0, 4, 1, 2, 3)
) # bne
v110 = ivy.permute_dims(
input_clone[batch_coor, :, d1, h1, w0], (0, 4, 1, 2, 3)
) # bsw
v111 = ivy.permute_dims(
input_clone[batch_coor, :, d1, h1, w1], (0, 4, 1, 2, 3)
) # bse
alpha = ivy.reshape(w_coor - w0, (n, 1, to_d, to_h, to_w))
beta = ivy.reshape(h_coor - h0, (n, 1, to_d, to_h, to_w))
gamma = ivy.reshape(d_coor - d0, (n, 1, to_d, to_h, to_w))
alpha = ivy.astype(alpha, ivy.float32)
beta = ivy.astype(beta, ivy.float32)
gamma = ivy.astype(gamma, ivy.float32)
v = (alpha * beta * gamma) * v111
v += ((1 - alpha) * beta * gamma) * v110
v += (alpha * (1 - beta) * gamma) * v101
v += ((1 - alpha) * (1 - beta) * gamma) * v100
v += (alpha * beta * (1 - gamma)) * v011
v += ((1 - alpha) * beta * (1 - gamma)) * v010
v += (alpha * (1 - beta) * (1 - gamma)) * v001
v += ((1 - alpha) * (1 - beta) * (1 - gamma)) * v000
return v
elif mode == "nearest":
ceil_mask = grid_clone % 1 == 0.5
grid_clone[ceil_mask] = ivy.astype(
ivy.ceil(grid_clone[ceil_mask]), ivy.int64
)
w_coor = ivy.reshape(grid_clone[..., 0], (n, to_d, to_h, to_w))
h_coor = ivy.reshape(grid_clone[..., 1], (n, to_d, to_h, to_w))
d_coor = ivy.reshape(grid_clone[..., 2], (n, to_d, to_h, to_w))
w_coor = ivy.astype(ivy.round(w_coor), ivy.int64) + 3
h_coor = ivy.astype(ivy.round(h_coor), ivy.int64) + 3
d_coor = ivy.astype(ivy.round(d_coor), ivy.int64) + 3
return ivy.permute_dims(
input_clone[batch_coor, :, d_coor, h_coor, w_coor], (0, 4, 1, 2, 3)
)
elif mode == "bicubic":
raise ivy.exceptions.IvyError("Bicubic is not support in 3D grid sampling")
else:
raise ivy.exceptions.IvyError(f"Not supported input shape {input_clone.shape}")
def grid_sample_padding(grid, padding_mode, align_corners, borders=None):
if padding_mode == "reflection":
if align_corners:
for idx, border in enumerate(borders):
grid[..., idx] = reflect(grid[..., idx], 0, 2 * (border - 1))
grid[..., idx] = ivy.clip(grid[..., idx], 0, border - 1)
else:
for idx, border in enumerate(borders):
grid[..., idx] = reflect(grid[..., idx], -1, 2 * border - 1)
grid[..., idx] = ivy.clip(grid[..., idx], 0, border - 1)
elif padding_mode == "border":
for idx, border in enumerate(borders):
grid[..., idx] = ivy.clip(grid[..., idx], 0, border - 1)
masks = []
for idx, border in enumerate(borders):
masks.append(ivy.bitwise_or(grid[..., idx] < -4, grid[..., idx] > border + 2))
borders[idx] += 1
zeros_mask = masks[0]
for i in range(1, len(borders)):
zeros_mask = ivy.bitwise_or(zeros_mask, masks[i])
if grid[zeros_mask].shape[0] > 0:
grid[zeros_mask] = ivy.array(borders)
return grid
@with_unsupported_dtypes(
{
"2.2 and below": (
"bfloat16",
"float16",
)
},
"torch",
)
@to_ivy_arrays_and_back
def interpolate(
input,
size=None,
scale_factor=None,
mode="nearest",
align_corners=None,
recompute_scale_factor=None,
antialias=False,
):
if (
mode not in ["linear", "bilinear", "bicubic", "trilinear"]
and align_corners is not None
):
raise ivy.utils.exceptions.IvyException(
"align_corners option can only be set with the interpolating"
f"modes: linear | bilinear | bicubic | trilinear (got {mode})"
)
ivy.utils.assertions.check_elem_in_list(
ivy.get_num_dims(input),
range(3, 6),
message=(
"Input Error: Only 3D, 4D and 5D input Tensors supported (got"
f" {ivy.get_num_dims(input)}D) for the modes: nearest | linear | bilinear |"
f" bicubic | trilinear | area | nearest-exact (got {mode})"
),
)
return ivy.interpolate(
input,
size,
mode=mode,
scale_factor=scale_factor,
recompute_scale_factor=recompute_scale_factor,
align_corners=bool(align_corners),
antialias=antialias,
)
@to_ivy_arrays_and_back
def pad(input, pad, mode="constant", value=0):
value = 0 if value is None else value
mode_dict = {
"constant": "constant",
"reflect": "reflect",
"replicate": "edge",
"circular": "wrap",
}
if mode not in mode_dict:
raise ValueError(f"Unsupported padding mode: {mode}")
pad = _handle_padding_shape(pad, len(input.shape), mode)
return ivy.pad(input, pad, mode=mode_dict[mode], constant_values=value)
@to_ivy_arrays_and_back
def pixel_shuffle(input, upscale_factor):
input_shape = ivy.shape(input)
ivy.utils.assertions.check_equal(
ivy.get_num_dims(input),
4,
message=(
"pixel_shuffle expects 4D input, but got input with sizes"
f" {str(input_shape)}"
),
as_array=False,
)
b = input_shape[0]
c = input_shape[1]
h = input_shape[2]
w = input_shape[3]
upscale_factor_squared = upscale_factor * upscale_factor
ivy.utils.assertions.check_equal(
c % upscale_factor_squared,
0,
message="pixel_shuffle expects input channel to be divisible by square "
+ "of upscale_factor, but got input with sizes "
+ str(input_shape)
+ ", upscale_factor="
+ str(upscale_factor)
+ ", and self.size(1)="
+ str(c)
+ " is not divisible by "
+ str(upscale_factor_squared),
as_array=False,
)
oc = int(c / upscale_factor_squared)
oh = h * upscale_factor
ow = w * upscale_factor
input_reshaped = ivy.reshape(input, (b, oc, upscale_factor, upscale_factor, h, w))
return ivy.reshape(
ivy.permute_dims(input_reshaped, (0, 1, 4, 2, 5, 3)), (b, oc, oh, ow)
)
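# A minimal shape-only sketch (illustrative) of the reshape/permute route
# above, matching torch.nn.functional.pixel_shuffle:
#
# >>> x = ivy.zeros((1, 9, 4, 4))  # (b, c * r**2, h, w) with r = 3
# >>> pixel_shuffle(x, 3).shape
# (1, 1, 12, 12)                   # (b, c, h * r, w * r)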
@to_ivy_arrays_and_back
def pixel_unshuffle(input, downscale_factor):
input_shape = ivy.shape(input)
ivy.utils.assertions.check_equal(
ivy.get_num_dims(input),
4,
message=(
f"pixel_unshuffle expects 4D input, but got input with sizes {input_shape}"
),
as_array=False,
)
b = input_shape[0]
c = input_shape[1]
h = input_shape[2]
w = input_shape[3]
downscale_factor_squared = downscale_factor * downscale_factor
ivy.utils.assertions.check_equal(
[h % downscale_factor, w % downscale_factor],
[0, 0], # Assert h % downscale_factor == 0 and w % downscale_factor == 0
message=(
"pixel_unshuffle expects input height and width to be divisible by "
f"downscale_factor, but got input with sizes {input_shape}"
f", downscale_factor= {downscale_factor}"
f", and either self.size(2)= {h}"
f" or self.size(3)= {w}"
f" is not divisible by {downscale_factor}"
),
as_array=False,
)
oc = c * downscale_factor_squared
oh = int(h / downscale_factor)
ow = int(w / downscale_factor)
input_reshaped = ivy.reshape(
input, (b, c, oh, downscale_factor, ow, downscale_factor)
)
return ivy.reshape(
ivy.permute_dims(input_reshaped, (0, 1, 3, 5, 2, 4)), (b, oc, oh, ow)
)
def reflect(x, low2, high2):
min = low2 / 2
span = (high2 - low2) / 2
x = ivy.abs(x - min)
frac_in = ivy.abs(x / span)
extra = (frac_in - ivy.floor(frac_in)) * ivy.abs(span)
flips = ivy.floor(x / span)
x[flips % 2 == 0] = (extra + min)[flips % 2 == 0]
x[flips % 2 != 0] = (span - extra + min)[flips % 2 != 0]
return x
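# A small numeric sketch of the reflection rule above (illustrative), as used
# for padding_mode="reflection" with align_corners=True and a border of 5
# (low2=0, high2=2 * (5 - 1) = 8): coordinates fold back into [0, 4].
#
# >>> reflect(ivy.array([-1.0, 0.0, 4.0, 5.0]), 0, 8)
# ivy.array([1., 0., 4., 3.])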
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def upsample(
input,
size=None,
scale_factor=None,
mode="nearest",
align_corners=None,
):
return interpolate(
input,
size=size,
scale_factor=scale_factor,
mode=mode,
align_corners=align_corners,
)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def upsample_bilinear(input, size=None, scale_factor=None):
return interpolate(
input, size=size, scale_factor=scale_factor, mode="bilinear", align_corners=True
)
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, "torch")
@to_ivy_arrays_and_back
def upsample_nearest(input, size=None, scale_factor=None):
return interpolate(input, size=size, scale_factor=scale_factor, mode="nearest")
| ivy/ivy/functional/frontends/torch/nn/functional/vision_functions.py/0 | {
"file_path": "ivy/ivy/functional/frontends/torch/nn/functional/vision_functions.py",
"repo_id": "ivy",
"token_count": 10626
} | 45 |
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from .gbm import GBLinear
class DMatrix:
def __init__(
self,
data,
label=None,
*,
weight=None,
base_margin=None,
missing=None,
silent=False,
feature_names=None,
feature_types=None,
nthread=None,
group=None,
qid=None,
label_lower_bound=None,
label_upper_bound=None,
feature_weights=None,
enable_categorical=False,
):
self.data = ivy.array(data) if not isinstance(data, ivy.Array) else data
self.label = (
ivy.array(label) if (not isinstance(label, ivy.Array) and label) else label
)
        self.weight = (
            ivy.array(weight)
            if (not isinstance(weight, ivy.Array) and weight)
            else weight
        )
        self.base_margin = (
            ivy.array(base_margin)
            if (not isinstance(base_margin, ivy.Array) and base_margin)
            else base_margin
        )
        self.missing = missing
        self.silent = silent
        self.feature_names = feature_names
        self.feature_types = feature_types
        self.nthread = nthread
        self.group = (
            ivy.array(group) if (not isinstance(group, ivy.Array) and group) else group
        )
        self.qid = ivy.array(qid) if (not isinstance(qid, ivy.Array) and qid) else qid
        self.label_lower_bound = (
            ivy.array(label_lower_bound)
            if (not isinstance(label_lower_bound, ivy.Array) and label_lower_bound)
            else label_lower_bound
        )
        self.label_upper_bound = (
            ivy.array(label_upper_bound)
            if (not isinstance(label_upper_bound, ivy.Array) and label_upper_bound)
            else label_upper_bound
        )
        self.feature_weights = (
            ivy.array(feature_weights)
            if (not isinstance(feature_weights, ivy.Array) and feature_weights)
            else feature_weights
        )
self.enable_categorical = enable_categorical
@with_unsupported_dtypes(
{"1.7.6 and below": ("bfloat16", "complex64", "complex128")}, "xgboost"
)
def num_row(self):
return ivy.shape(self.data)[0]
@with_unsupported_dtypes(
{"1.7.6 and below": ("bfloat16", "complex64", "complex128")}, "xgboost"
)
def num_col(self):
return ivy.shape(self.data)[1]
class Booster:
def __init__(self, params=None, cache=None, model_file=None, compile=False):
# cache[0] refers to input data while cache[1] refers to input target
n_feat = cache[0].shape[1]
n_inst = cache[0].shape[0]
n_output_group = ivy.unique_values(cache[1]).shape[0]
# by default xgboost calculates the mean of a target if base_score is not
# provided
params["base_score"] = (
cache[1].mean() if not params["base_score"] else params["base_score"]
)
# add num_feature, num_target and num_instances to params
params.update(
{
"num_feature": n_feat,
"num_output_group": n_output_group - 1,
"num_instances": n_inst,
}
)
# create gbm(as for now only gblinear booster is available)
self.gbm = GBLinear(params, compile=compile, cache=cache)
self.compile = compile
if self.compile:
self._comp_binary_prediction = ivy.trace_graph(
_binary_prediction, backend_compile=True, static_argnums=(0,)
)
# invoke function to get its compiled version
self._comp_binary_prediction(self.gbm.obj, cache[1])
def update(self, dtrain, dlabel, iteration, fobj=None):
"""Update for one iteration, with objective function calculated
internally. This function should not be called directly by users.
Parameters
----------
dtrain
Training data.
dlabel
Training labels.
iteration
Number of current iteration.
fobj
Custom objective.
"""
# ToDo: add support for custom objective
pred = self.gbm.pred(dtrain)
gpair = self.gbm.get_gradient(pred, dlabel)
self.gbm.do_boost(dtrain, gpair, iteration)
def predict(
self,
data,
output_margin=False,
pred_leaf=False,
pred_contribs=False,
approx_contribs=False,
pred_interactions=False,
validate_features=True,
training=False,
iteration_range=(0, 0),
strict_shape=False,
):
"""Predict with data. The full model will be used unless
        `iteration_range` is specified, meaning the user has to either slice the
        model or use the ``best_iteration`` attribute to get the prediction from
        the best model returned from early stopping.
Parameters
----------
data
The array storing the input.
output_margin
Whether to output the raw untransformed margin value.
pred_leaf
When this option is on, the output will be a matrix of (nsample,
ntrees) with each record indicating the predicted leaf index of
each sample in each tree. Note that the leaf index of a tree is
unique per tree, so you may find leaf 1 in both tree 1 and tree 0.
pred_contribs
When this is True the output will be a matrix of size (nsample,
nfeats + 1) with each record indicating the feature contributions
(SHAP values) for that prediction. The sum of all feature
contributions is equal to the raw untransformed margin value of the
prediction. Note the final column is the bias term.
approx_contribs
Approximate the contributions of each feature. Used when ``pred_contribs``
or ``pred_interactions`` is set to True. Changing the default of this
parameter (False) is not recommended.
pred_interactions
When this is True the output will be a matrix of size (nsample,
nfeats + 1, nfeats + 1) indicating the SHAP interaction values for
each pair of features. The sum of each row (or column) of the
interaction values equals the corresponding SHAP value (from
pred_contribs), and the sum of the entire matrix equals the raw
untransformed margin value of the prediction. Note the last row and
column correspond to the bias term.
validate_features
When this is True, validate that the Booster's and data's
feature_names are identical. Otherwise, it is assumed that the
feature_names are the same.
training
            Whether the prediction value is used for training. This can affect `dart`
booster, which performs dropouts during training iterations but use all
trees for inference. If you want to obtain result with dropouts, set this
parameter to `True`. Also, the parameter is set to true when obtaining
prediction for custom objective function.
iteration_range
            Specifies which layer of trees is used in prediction. For example, if a
            random forest is trained with 100 rounds and `iteration_range=(10, 20)`
            is specified, then only the forests built during the [10, 20) (half-open)
            rounds are used in this prediction. Unsupported for the gblinear booster.
strict_shape
When set to True, output shape is invariant to whether classification is
used.
For both value and margin prediction, the output shape is (n_samples,
n_groups), n_groups == 1 when multi-class is not used. Default to False, in
which case the output shape can be (n_samples, ) if multi-class is not used.
Returns
-------
prediction : ivy array
"""
# currently supports prediction for binary task
# get raw predictions
pred = self.gbm.pred(data)
args = (self.gbm.obj, pred)
if self.compile:
return self._comp_binary_prediction(*args)
else:
return _binary_prediction(*args)
# --- Helpers --- #
# --------------- #
def _binary_prediction(obj, raw_pred):
# apply activation function
pred = obj.pred_transform(raw_pred)
# apply probability thresholding
return ivy.where(pred >= 0.5, 1.0, 0.0)
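# A minimal end-to-end sketch of the training loop these classes support
# (illustrative; the GBLinear hyper-parameter names are elided below as
# assumptions, and params must carry a "base_score" key since
# Booster.__init__ reads it unconditionally):
#
# >>> X = ivy.random_uniform(shape=(8, 3))
# >>> y = ivy.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
# >>> params = {"base_score": None, **gblinear_params}  # remaining params elided
# >>> booster = Booster(params, cache=[X, y])
# >>> for it in range(10):
# ...     booster.update(X, y, it)
# >>> booster.predict(X)  # 0.0 / 1.0 labels after sigmoid + 0.5 threshold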
| ivy/ivy/functional/frontends/xgboost/core.py/0 | {
"file_path": "ivy/ivy/functional/frontends/xgboost/core.py",
"repo_id": "ivy",
"token_count": 3928
} | 46 |
"""Collection of device Ivy functions."""
# global
import os
import gc
import abc
import math
import psutil
import warnings
import types
from typing import Type, Optional, Tuple
# noinspection PyUnresolvedReferences
try:
import pynvml
try:
pynvml.nvmlInit()
except pynvml.NVMLError:
pass
except ImportError:
warnings.warn(
"pynvml installation was not found in the environment, functionalities"
" of the Ivy's device module will be limited. Please install pynvml if"
" you wish to use GPUs with Ivy."
)
# nvidia-ml-py (pynvml) is not installed in CPU Dockerfile.
from typing import Union, Callable, Iterable, Any
# local
import ivy
from ivy.func_wrapper import (
handle_out_argument,
to_native_arrays_and_back,
inputs_to_native_arrays,
handle_nestable,
handle_array_like_without_promotion,
handle_backend_invalid,
)
from ivy.utils.exceptions import handle_exceptions
default_device_stack = []
soft_device_mode_stack = []
dev_handles = {}
split_factors = {}
max_chunk_sizes = {}
# Extra #
# ------#
class DefaultDevice:
"""Ivy Device Class."""
def __init__(
self,
device: Union[ivy.Device, ivy.NativeDevice],
/,
) -> None:
"""Initialize the DefaultDevice class.
Parameters
----------
device
            The device string - as an ivy device or native device class
Examples
--------
A "tpu" as device:
>>> x = ivy.DefaultDevice("tpu")
"""
self._dev = device
def __enter__(self):
"""Enter the runtime context related to the specified device.
Returns
-------
ret
Self, an instance of the same class.
Examples
--------
A "cpu" as device:
>>> with ivy.DefaultDevice("cpu") as device:
>>> # with block calls device.__enter__()
>>> print(device._dev)
"cpu"
"""
ivy.set_default_device(self._dev)
ivy.set_soft_device_mode(True)
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[Type[BaseException]],
exc_tb: Optional[types.TracebackType],
) -> Union[ivy.Device, str]:
"""Exit the runtime context related to the specified device.
Parameters
----------
exc_type
The type of the exception that was raised.
exc_val
The exception that was raised.
exc_tb
The traceback of the exception that was raised.
Returns
-------
ret
If no exception was raised, returns an instance of the same class.
Examples
--------
A "gpu" as device:
>>> with ivy.DefaultDevice("gpu") as device:
>>> pass
>>> # after with block device.__exit__() is called
>>> print(device._dev)
"cpu"
"""
ivy.unset_default_device()
ivy.unset_soft_device_mode()
if self and (exc_type is not None):
raise exc_val
return self
def handle_soft_device_variable(*args, fn, **kwargs):
return ivy.current_backend().handle_soft_device_variable(*args, fn=fn, **kwargs)
# Helpers #
def _get_nvml_gpu_handle(device: Union[ivy.Device, ivy.NativeDevice], /) -> int:
global dev_handles
if device in dev_handles:
return dev_handles[device]
gpu_idx = int(device.split(":")[-1])
handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_idx)
dev_handles[device] = handle
return handle
def _shift_native_arrays_on_default_device(*args, **kwargs):
with ivy.ArrayMode(False):
default_device = ivy.default_device()
args, kwargs = ivy.nested_map(
lambda x: (
ivy.to_device(x, default_device)
if (ivy.is_native_array(x) and ivy.dev(x) != default_device)
else x
),
[args, kwargs],
)
return args, kwargs, ivy.as_native_dev(default_device)
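# A minimal sketch of how the helper above is typically consumed
# (illustrative; `fn` stands for any backend call whose array arguments must
# live on ivy.default_device() first):
#
# >>> args, kwargs, device = _shift_native_arrays_on_default_device(x, y=y)
# >>> # every native array in args/kwargs now resides on `device`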
# Device Queries #
# Array Printing
@handle_exceptions
def get_all_ivy_arrays_on_dev(
device: Union[ivy.Device, ivy.NativeDevice],
/,
) -> ivy.Container:
"""Get all ivy arrays which are currently alive on the specified device.
Parameters
----------
device
The device handle from which to get the arrays
Returns
-------
ret
Container with the arrays found for the specified device [identity, array]
Examples
--------
>>> x = ivy.array([1,0,2])
>>> y = ivy.dev(x)
>>> z = ivy.get_all_ivy_arrays_on_dev(y)
>>> print(z)
{139740789224448:ivy.array([1,0,2])},
"""
device = ivy.as_ivy_dev(device)
all_arrays = []
for obj in gc.get_objects():
if (
            isinstance(obj, ivy.data_classes.array.array.Array)
and ivy.is_ivy_array(obj)
and ivy.dev(obj) == device
):
all_arrays.append(obj)
return ivy.Container(dict(zip([str(id(a)) for a in all_arrays], all_arrays)))
@handle_exceptions
def num_ivy_arrays_on_dev(device: Union[ivy.Device, ivy.NativeDevice], /) -> int:
"""Return the number of arrays which are currently alive on the specified
device.
Parameters
----------
device
The device handle from which to count the arrays
Returns
-------
ret
Number of arrays on the specified device
Examples
--------
>>> x1 = ivy.array([-1, 0, 5.2])
>>> x2 = ivy.array([-1, 0, 5.2, 4, 5])
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
2
>>> x1 = ivy.native_array([-1, 0, 5.2])
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
0
>>> x = ivy.Container(x1=ivy.array([-1]),
... x2=ivy.native_array([-1]))
>>> y = ivy.num_ivy_arrays_on_dev(ivy.default_device())
>>> print(y)
1
"""
return len(ivy.get_all_ivy_arrays_on_dev(device))
@handle_exceptions
@handle_nestable
def print_all_ivy_arrays_on_dev(
*,
device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
attr_only: bool = True,
) -> None:
"""Print the shape and dtype for all ivy arrays which are currently alive
on the specified device.
Parameters
----------
device
The device on which to print the arrays
attr_only
Whether or not to only print the `shape` and `dtype` attributes of the array
Examples
--------
>>> x = ivy.array([[1,0,2], [3,2,1]])
>>> y = ivy.dev(x)
>>> ivy.print_all_ivy_arrays_on_dev(y)
((3,), 'int32')
((3,), 'int32')
>>> x = ivy.array([[1,0,2], [3,2,1]])
>>> y = ivy.dev(x)
>>> ivy.print_all_ivy_arrays_on_dev(y, attr_only = False)
[1,0,2]
[3,2,1]
"""
arrs = ivy.get_all_ivy_arrays_on_dev(device).values()
if attr_only:
[print((arr.shape, arr.dtype)) for arr in arrs]
else:
[print(arr) for arr in arrs]
ivy.soft_device_mode = soft_device_mode_stack[-1] if soft_device_mode_stack else False
@handle_exceptions
def set_soft_device_mode(mode: bool) -> None:
"""Set the mode of whether to move input arrays to `ivy.default_device()`
before performing an operation.
    Parameters
    ----------
mode
boolean whether to move input arrays
Examples
--------
>>> ivy.set_soft_device_mode(False)
>>> ivy.soft_device_mode
False
>>> ivy.set_soft_device_mode(True)
>>> ivy.soft_device_mode
True
"""
global soft_device_mode_stack
ivy.utils.assertions.check_isinstance(mode, bool)
soft_device_mode_stack.append(mode)
ivy.__setattr__("soft_device_mode", mode, True)
@handle_exceptions
def unset_soft_device_mode() -> None:
"""Reset the mode of moving input arrays to `ivy.default_device()` before
performing an operation.
Examples
--------
>>> ivy.set_soft_device_mode(False)
>>> ivy.soft_device_mode
False
>>> ivy.unset_soft_device_mode()
>>> ivy.soft_device_mode
    False
"""
global soft_device_mode_stack
if soft_device_mode_stack:
soft_device_mode_stack.pop(-1)
mode = soft_device_mode_stack[-1] if soft_device_mode_stack else False
ivy.__setattr__("soft_device_mode", mode, True)
# Retrieval
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@inputs_to_native_arrays
def dev(
x: Union[ivy.Array, ivy.NativeArray], /, *, as_native: bool = False
) -> Union[ivy.Device, ivy.NativeDevice]:
"""Get the native device handle for input array x.
Parameters
----------
x
array for which to get the device handle.
as_native
Whether or not to return the dev in native format. Default is ``False``.
Returns
-------
ret
Device handle for the array.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([3, 1, 4, 5])
>>> y = ivy.dev(x)
>>> print(y)
cpu
With :class:`ivy.NativeArray` input:
>>> x = ivy.native_array([[2, 5, 4], [3, 1, 5]])
>>> y = ivy.dev(x, as_native=True)
>>> print(y)
cpu
"""
return ivy.current_backend(x).dev(x, as_native=as_native)
# Conversions
@handle_exceptions
def as_ivy_dev(device: Union[ivy.Device, str], /) -> ivy.Device:
"""Convert device to string representation.
Parameters
----------
device
The device handle to convert to string.
Returns
-------
ret
Device string e.g. 'cuda:0'.
Examples
--------
>>> y = ivy.as_ivy_dev('cpu')
>>> print(y)
cpu
"""
return ivy.current_backend().as_ivy_dev(device)
@handle_exceptions
def as_native_dev(device: Union[ivy.Device, ivy.NativeDevice], /) -> ivy.NativeDevice:
"""Convert device string representation to native device type.
Parameters
----------
device
The device string to convert to native device handle.
A native device handle can be passed in instead - in this case
the unmodified parameter is returned.
Returns
-------
ret
Native device handle.
Examples
--------
With :class:`ivy.Device` input:
>>> ivy.set_backend("numpy")
>>> ivy.as_native_dev("cpu")
'cpu'
>>> ivy.set_backend("tensorflow")
>>> ivy.as_native_dev("tpu:3")
'/TPU:3'
With :class:`ivy.NativeDevice` input:
>>> import torch
>>> device = torch.device("cuda")
>>> device
device(type='cuda')
>>> ivy.as_native_dev(device)
device(type='cuda')
"""
return ivy.current_backend().as_native_dev(device)
# Memory
@handle_exceptions
def clear_cached_mem_on_dev(device: Union[ivy.Device, ivy.NativeDevice], /) -> None:
"""Clear memory cache on target device.
Parameters
----------
device
The device string to convert to native device handle or native device handle.
Examples
--------
>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.clear_cached_mem_on_dev(device)
"""
ivy.current_backend().clear_cached_mem_on_dev(device)
@handle_exceptions
def total_mem_on_dev(device: Union[ivy.Device, ivy.NativeDevice], /) -> float:
"""Get the total amount of memory (in GB) for a given device string. In
case of CPU, the total RAM is returned.
Parameters
----------
device
The device string to convert to native device handle.
Returns
-------
ret
The total memory on the device in GB.
Examples
--------
>>> x = ivy.total_mem_on_dev("cpu")
>>> print(x)
53.66700032
>>> x = ivy.total_mem_on_dev("gpu:0")
>>> print(x)
8.589934592
"""
if "gpu" in device:
handle = _get_nvml_gpu_handle(device)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
return info.total / 1e9
elif device == "cpu":
return psutil.virtual_memory().total / 1e9
else:
raise ivy.utils.exceptions.IvyException(
'Invalid device string input, must be in the form "gpu:idx" or "cpu", but'
f" found {device}"
)
@handle_exceptions
def used_mem_on_dev(
device: Union[ivy.Device, ivy.NativeDevice],
/,
*,
process_specific: bool = False,
) -> float:
"""Get the used memory (in GB) for a given device string. In case of CPU,
the used RAM is returned.
Parameters
----------
device
The device string to convert to native device handle.
process_specific
Whether to check the memory used by this python process alone. Default is
False.
Returns
-------
ret
The used memory on the device in GB.
Examples
--------
>>> x = ivy.used_mem_on_dev("cpu", process_specific = False)
>>> print(x)
6.219563008
>>> x = ivy.used_mem_on_dev("cpu", process_specific = True)
>>> print(x)
0.902400346
>>> y = ivy.used_mem_on_dev("gpu:0", process_specific = False)
>>> print(y)
0.525205504
"""
ivy.clear_cached_mem_on_dev(device)
if "gpu" in device:
handle = _get_nvml_gpu_handle(device)
if process_specific:
pid = os.getpid()
for process in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
if process.pid == pid:
return process.usedGpuMemory / 1e9
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
return info.used / 1e9
elif device == "cpu":
if process_specific:
return psutil.Process(os.getpid()).memory_info().rss / 1e9
vm = psutil.virtual_memory()
return (vm.total - vm.available) / 1e9
else:
raise ivy.utils.exceptions.IvyException(
'Invalid device string input, must be in the form "gpu:idx" or "cpu", but'
f" found {device}"
)
@handle_exceptions
def percent_used_mem_on_dev(
device: Union[ivy.Device, ivy.NativeDevice],
/,
*,
process_specific: bool = False,
) -> float:
"""Get the percentage used memory for a given device string. In case of
CPU, the used RAM is returned.
Parameters
----------
device
The device string to convert to native device handle.
process_specific
Whether to check the memory used by this python process alone. Default is
False.
Returns
-------
ret
The percentage used memory on the device.
Examples
--------
>>> x = ivy.percent_used_mem_on_dev("cpu", process_specific = False)
>>> print(x)
94.036902561555
>>> x = ivy.percent_used_mem_on_dev("cpu", process_specific = True)
>>> print(x)
0.7024003467681645
>>> x = ivy.as_native_dev("gpu:0")
>>> y = ivy.percent_used_mem_on_dev(x, process_specific = False)
>>> print(y)
0.7095597456708771
"""
ivy.clear_cached_mem_on_dev(device)
if "gpu" in device:
handle = _get_nvml_gpu_handle(device)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
if process_specific:
pid = os.getpid()
for process in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
if process.pid == pid:
return (process.usedGpuMemory / info.total) * 100
return (info.used / info.total) * 100
elif device == "cpu":
vm = psutil.virtual_memory()
if process_specific:
return (psutil.Process(os.getpid()).memory_info().rss / vm.total) * 100
return (1 - (vm.available / vm.total)) * 100
else:
raise ivy.utils.exceptions.IvyException(
'Invalid device string input, must be in the form "gpu:idx" or "cpu", but'
f" found {device}"
)
# Utilization
@handle_exceptions
def dev_util(
device: Union[ivy.Device, ivy.NativeDevice],
/,
) -> float:
"""Get the current utilization (%) for a given device.
Parameters
----------
device
The device string of the device to query utilization for.
Returns
-------
ret
The device utilization (%)
Examples
--------
>>> ivy.dev_util('cpu')
13.4
>>> ivy.dev_util('gpu:0')
7.8
>>> ivy.dev_util('cpu')
93.4
>>> ivy.dev_util('gpu:2')
57.4
>>> ivy.dev_util('cpu')
84.2
"""
if device == "cpu":
return psutil.cpu_percent()
elif "gpu" in device:
handle = _get_nvml_gpu_handle(device)
return pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
else:
raise ivy.utils.exceptions.IvyException(
'Invalid device string input, must be in the form "gpu:idx" or "cpu", but'
f" found {device}"
)
# Availability
@handle_exceptions
def gpu_is_available() -> bool:
"""Determine whether a GPU is available to use, with the backend framework.
Returns
-------
ret
Boolean, as to whether a gpu is available.
Examples
--------
>>> print(ivy.gpu_is_available())
False
"""
return ivy.current_backend().gpu_is_available()
@handle_exceptions
def num_cpu_cores(*, logical: bool = True) -> int:
"""Determine the number of cores available in the cpu.
Parameters
----------
logical
Whether request is for number of physical or logical cores available in CPU
Returns
-------
ret
Number of cores available in CPU
Examples
--------
>>> print(ivy.num_cpu_cores(logical=False))
2
"""
return psutil.cpu_count(logical=logical)
@handle_exceptions
def num_gpus() -> int:
"""Determine the number of available GPUs, with the backend framework.
Returns
-------
ret
Number of available GPUs.
Examples
--------
>>> print(ivy.num_gpus())
1
"""
return ivy.current_backend().num_gpus()
@handle_exceptions
def tpu_is_available() -> bool:
"""Determine whether a TPU is available to use, with the backend framework.
Returns
-------
ret
Boolean, as to whether a tpu is available.
Examples
--------
>>> ivy.set_backend("torch")
>>> print(ivy.tpu_is_available())
False
"""
return ivy.current_backend().tpu_is_available()
# Default Device #
# noinspection PyShadowingNames
@handle_exceptions
def default_device(
device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
/,
*,
item: Optional[Union[list, tuple, dict, ivy.Array, ivy.NativeArray]] = None,
as_native: Optional[bool] = None,
) -> Union[ivy.Device, ivy.NativeDevice]:
"""Return the input device or the default device. If the as_native flag is
set, the device will be converted to a native device. If the item is
provided, the item's device is returned. If the device is not provided, the
last default device is returned. If a default device has not been set, the
first gpu is returned if available, otherwise the cpu is returned.
Parameters
----------
device
The device to be returned or converted.
item
The item to get the device from.
as_native
Whether to convert the device to a native device.
Returns
-------
ret
Device handle or string.
Examples
--------
>>> ivy.default_device()
device(type='cpu')
>>> ivy.default_device("gpu:0")
'gpu:0'
>>> ivy.default_device(item=[], as_native=False)
'cpu'
>>> ivy.default_device(item=(), as_native=True)
device(type='cpu')
>>> ivy.default_device(item={"a": 1}, as_native=True)
device(type='cpu')
>>> x = ivy.array([1., 2., 3.])
>>> x = ivy.to_device(x, 'gpu:0')
>>> ivy.default_device(item=x, as_native=True)
device(type='gpu', id=0)
"""
if ivy.exists(device):
if as_native is True:
return ivy.as_native_dev(device)
elif as_native is False:
return ivy.as_ivy_dev(device)
return device
as_native = ivy.default(as_native, False)
if ivy.exists(item):
if isinstance(item, (list, tuple, dict)) and len(item) == 0:
pass
elif ivy.is_array(item):
return ivy.dev(item, as_native=as_native)
global default_device_stack
if not default_device_stack:
ret = "cpu"
else:
ret = default_device_stack[-1]
if as_native:
return ivy.as_native_dev(ret)
return ivy.as_ivy_dev(ret)
@handle_exceptions
def set_default_device(device: Union[ivy.Device, ivy.NativeDevice], /) -> None:
"""Sets the default device to the argument provided in the function.
Parameters
----------
device
The device to be set as the default device.
Returns
-------
ret
The new default device.
Examples
--------
>>> ivy.default_device()
'cpu'
>>> ivy.set_backend('jax')
>>> ivy.set_default_device('gpu:0')
>>> ivy.default_device()
'gpu:0'
>>> ivy.set_backend('torch')
>>> ivy.set_default_device('gpu:1')
>>> ivy.default_device()
'gpu:1'
>>> ivy.set_backend('tensorflow')
>>> ivy.set_default_device('tpu:0')
>>> ivy.default_device()
'tpu:0'
>>> ivy.set_backend('paddle')
>>> ivy.set_default_device('cpu')
>>> ivy.default_device()
'cpu'
>>> ivy.set_backend('mxnet')
>>> ivy.set_default_device('cpu')
>>> ivy.default_device()
'cpu'
"""
global default_device_stack
default_device_stack.append(device)
@handle_exceptions
def unset_default_device() -> None:
"""Reset the default device to "cpu".
Examples
--------
>>> ivy.set_default_device("gpu:0")
>>> ivy.default_device()
"gpu:0"
>>> ivy.unset_default_device()
>>> ivy.default_device()
"cpu"
"""
global default_device_stack
if default_device_stack:
default_device_stack.pop(-1)
# Device Allocation #
@handle_exceptions
@handle_backend_invalid
@handle_nestable
@handle_array_like_without_promotion
@handle_out_argument
@to_native_arrays_and_back
def to_device(
x: Union[ivy.Array, ivy.NativeArray],
device: Union[ivy.Device, ivy.NativeDevice],
/,
*,
stream: Optional[Union[int, Any]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Move the input array x to the desired device, specified by device
string.
Parameters
----------
x
input array to be moved to the desired device
device
device to move the input array `x` to
stream
stream object to use during copy. In addition to the types supported in
array.__dlpack__(), implementations may choose to support any library-specific
stream object with the caveat that any code using such an object would not be
portable.
out
optional output array, for writing the result to. It must have a shape that the
inputs broadcast to.
Returns
-------
ret
input array x placed on the desired device
Examples
--------
>>> x = ivy.array([1., 2., 3.])
>>> x = ivy.to_device(x, 'cpu')
>>> print(x.device)
cpu
"""
return ivy.current_backend(x).to_device(x, device, stream=stream, out=out)
# Function Splitting #
@handle_exceptions
def split_factor(
device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
/,
) -> float:
"""Get a device's global split factor, which can be used to scale the
device's batch splitting chunk sizes across the codebase.
If the global split factor is set for a given device, this returns the
split factor value for the device from the split factors dictionary. If
the global split factor for a device is not configured, the default
value of 0.0 is returned.
Parameters
----------
device
The device to query the split factor for. Uses the default device by default.
Returns
-------
ret
The split factor for the specified device.
Examples
--------
>>> x = ivy.split_factor()
>>> print(x)
0.0
>>> y = ivy.split_factor("gpu:0")
>>> print(y)
0.0
"""
global split_factors
device = ivy.default(device, default_device())
return split_factors.setdefault(device, 0.0)
@handle_exceptions
def set_split_factor(
factor: float, /, *, device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None
) -> None:
"""Set the global split factor for a given device, which can be used to
scale batch splitting chunk sizes for the device across the codebase.
Parameters
----------
factor
The factor to set the device-specific split factor to.
device
The device to set the split factor for. Uses the default device by default.
Examples
--------
>>> print(ivy.default_device())
cpu
>>> ivy.set_split_factor(0.5)
>>> print(ivy.split_factors)
{'cpu': 0.5}
>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.set_split_factor(0.3, device=device)
>>> print(ivy.split_factors)
{device(type='cuda'): 0.3}
>>> ivy.set_split_factor(0.4, device="tpu")
>>> print(ivy.split_factors)
{'tpu': 0.4}
>>> import torch
>>> ivy.set_backend("torch")
>>> device = torch.device("cuda")
>>> ivy.set_split_factor(0.2)
>>> ivy.set_split_factor(0.3, device='gpu')
>>> print(ivy.split_factors)
{'cpu': 0.2, 'gpu': 0.3}
"""
ivy.utils.assertions.check_less(0, factor, allow_equal=True, as_array=False)
global split_factors
device = ivy.default(device, default_device())
split_factors[device] = factor
@handle_exceptions
def split_func_call(
func: Callable,
inputs: Union[ivy.Array, ivy.NativeArray],
mode: str,
/,
*,
max_chunk_size: Optional[int] = None,
chunk_size: Optional[int] = None,
input_axes: Union[int, Iterable[int]] = 0,
output_axes: Optional[Union[int, Iterable[int]]] = None,
stop_gradients: bool = False,
device: Optional[Union[ivy.Device, ivy.NativeDevice]] = None,
) -> Union[ivy.Array, ivy.NativeArray]:
"""Call a function by splitting its inputs along a given axis, and calling
the function in chunks, rather than feeding the entire input array at once.
This can be useful to reduce memory usage of the device the arrays are on.
Parameters
----------
func
The function to be called.
inputs
A list of inputs to pass into the function.
mode
The mode by which to unify the return values, must be one of
[ concat | mean | sum ]
max_chunk_size
The maximum size of each of the chunks to be fed into the function.
chunk_size
The size of each of the chunks to be fed into the function. Specifying this arg
overrides the global split factor. Default is ``None``.
input_axes
The axes along which to split each of the inputs, before passing to the
function. Default is ``0``.
output_axes
The axes along which to concat each of the returned outputs. Default is the
same as the first input axis.
stop_gradients
Whether to stop the gradients for each computed return. Default is ``False``.
device
The device to query the split factor for. Uses the default device by default.
Returns
-------
ret
The return from the function, following input splitting and re-concatenation.
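Examples
--------
A minimal illustrative example (the lambda stands in for any function
whose inputs can be chunked along the split axis):
>>> x = ivy.array([1., 2., 3., 4.])
>>> y = ivy.split_func_call(lambda a: a * 2, [x], "concat", chunk_size=2)
>>> print(y)
ivy.array([2., 4., 6., 8.])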
"""
if isinstance(input_axes, int):
input_axes = [input_axes] * len(inputs)
if not ivy.exists(max_chunk_size) and not ivy.exists(chunk_size):
shape_key = "_".join([str(inp.shape) for inp in inputs])
if shape_key in max_chunk_sizes:
max_chunk_size = max_chunk_sizes[shape_key]
else:
max_chunk_size = 0
max_dim = max(inp.cont_shape[inp_ax] for inp, inp_ax in zip(inputs, input_axes))
if max_dim > max_chunk_size:
max_chunk_sizes[shape_key] = max_dim
max_chunk_size = max_dim
chunk_size = ivy.default(
chunk_size,
default_val=lambda: 1
+ int(
round((max_chunk_size - 1) * ivy.split_factor(ivy.default_device(device)))
),
with_callable=True,
)
dim_size = inputs[0].shape[input_axes[0]]
if chunk_size >= dim_size:
return func(*inputs)
num_chunks = dim_size / chunk_size
num_chunks_floored = math.floor(num_chunks)
num_chunks_ceiled = math.ceil(num_chunks)
chunk_sizes = [chunk_size] * num_chunks_floored
if num_chunks != num_chunks_floored:
chunk_sizes.append(dim_size - chunk_size * num_chunks_floored)
inputs_split = [
(
ivy.split(
inp,
num_or_size_splits=chunk_sizes,
axis=input_axes[i],
with_remainder=True,
)
if ivy.is_array(inp)
else inp.split(
num_or_size_splits=chunk_sizes, axis=input_axes[i], with_remainder=True
)
)
for i, inp in enumerate(inputs)
]
is_mean = mode == "mean"
is_sum = mode == "sum"
post_fn = ivy.stop_gradient if stop_gradients else lambda x: x
if is_mean or is_sum:
sums = None
for inps in zip(*inputs_split):
if sums is None:
sums = func(*inps)
sums = (
[post_fn(s) for s in sums]
if isinstance(sums, tuple)
else [post_fn(sums)]
)
else:
ret = func(*inps)
if isinstance(ret, tuple):
for i, r in enumerate(ret):
sums[i] = sums[i] + post_fn(r)
else:
sums[0] = sums[0] + post_fn(ret)
sums_or_means = [s / num_chunks_ceiled for s in sums] if is_mean else sums
return sums_or_means[0] if len(sums_or_means) == 1 else tuple(sums_or_means)
rets = [func(*i) for i in zip(*inputs_split)]
rets = [
tuple(post_fn(r) for r in ret) if isinstance(ret, tuple) else (post_fn(ret),)
for ret in rets
]
num_outputs = len(rets[0])
if output_axes is None:
output_axes = [input_axes[0]] * num_outputs
elif isinstance(output_axes, int):
output_axes = [output_axes] * num_outputs
ret = [
ivy.concat([r[i] for r in rets], axis=output_axes[i])
for i in range(num_outputs)
]
return ret[0] if len(ret) == 1 else ret
def _is_valid_devices_attributes(fn: Callable) -> bool:
if hasattr(fn, "supported_devices") and hasattr(fn, "unsupported_devices"):
fn_supported_devices = fn.supported_devices
fn_unsupported_devices = fn.unsupported_devices
if isinstance(fn_supported_devices, dict):
if isinstance(fn_unsupported_devices, dict):
backend_str = ivy.current_backend_str()
if (
backend_str in fn_supported_devices
and backend_str in fn_unsupported_devices
):
return False
elif isinstance(fn_unsupported_devices, tuple):
return False
return True
def _get_devices(fn: Callable, complement: bool = True) -> Tuple:
valid_devices = ivy.valid_devices
invalid_devices = ivy.invalid_devices
all_devices = ivy.all_devices
supported = set(ivy.valid_devices)
is_backend_fn = "backend" in fn.__module__
is_frontend_fn = "frontend" in fn.__module__
is_einops_fn = hasattr(fn, "__name__") and "einops" in fn.__name__
if not is_backend_fn and not is_frontend_fn and not is_einops_fn:
if complement:
supported = set(all_devices).difference(supported)
return supported
# The values of these attributes are formatted as either:
# 1. fn.supported_devices = ("cpu",)
# 2. fn.supported_devices = {"backend_str": ("cpu",)}
# The dict form could also have the "all" value for the framework.
basic = [
("supported_devices", set.intersection, valid_devices),
("unsupported_devices", set.difference, invalid_devices),
]
for key, merge_fn, base in basic:
if hasattr(fn, key):
v = getattr(fn, key)
if "einops" in fn.__name__ and isinstance(v, dict):
v = v.get(ivy.current_backend_str(), base)
ivy.utils.assertions.check_isinstance(v, tuple)
supported = merge_fn(supported, set(v))
if complement:
supported = set(all_devices).difference(supported)
return tuple(supported)
@handle_exceptions
@handle_nestable
def function_supported_devices(
fn: Callable, recurse: bool = True
) -> Union[Tuple, dict]:
"""Return the supported devices of the current backend's function. The
function returns a dict containing the supported devices for the
compositional and primary implementations in case of partial mixed
functions.
Parameters
----------
fn
The function to check for the supported device attribute
recurse
Whether to recurse into used ivy functions. Default is ``True``.
Returns
-------
ret
Tuple or dict containing the supported devices of the function
Examples
--------
>>> import ivy
>>> ivy.set_backend('numpy')
>>> print(ivy.function_supported_devices(ivy.ones))
('cpu',)
>>> ivy.set_backend('torch')
>>> x = sorted(ivy.function_supported_devices(ivy.ones))
>>> print(x)
['cpu', 'gpu']
"""
ivy.utils.assertions.check_true(
_is_valid_devices_attributes(fn),
"supported_devices and unsupported_devices attributes cannot both "
"exist in a particular backend",
)
if hasattr(fn, "partial_mixed_handler"):
return {
"compositional": function_supported_devices(fn.compos, recurse=recurse),
"primary": _get_devices(fn, complement=False),
}
else:
supported_devices = set(_get_devices(fn, complement=False))
if recurse:
supported_devices = ivy.functional.data_type._nested_get(
fn, supported_devices, set.intersection, function_supported_devices
)
return (
supported_devices
if isinstance(supported_devices, dict)
else tuple(supported_devices)
)
@handle_exceptions
@handle_nestable
def function_unsupported_devices(
fn: Callable, recurse: bool = True
) -> Union[Tuple, dict]:
"""Return the unsupported devices of the current backend's function. The
function returns a dict containing the unsupported devices for the
compositional and primary implementations in case of partial mixed
functions.
Parameters
----------
fn
The function to check for the unsupported device attribute
recurse
Whether to recurse into used ivy functions. Default is ``True``.
Returns
-------
ret
Tuple or dict containing the unsupported devices of the function
Examples
--------
>>> print(ivy.function_unsupported_devices(ivy.ones))
('tpu',)
"""
ivy.utils.assertions.check_true(
_is_valid_devices_attributes(fn),
"supported_devices and unsupported_devices attributes cannot both "
"exist in a particular backend",
)
if hasattr(fn, "partial_mixed_handler"):
return {
"compositional": function_unsupported_devices(fn.compos, recurse=recurse),
"primary": _get_devices(fn, complement=True),
}
else:
unsupported_devices = set(_get_devices(fn, complement=True))
if recurse:
unsupported_devices = ivy.functional.data_type._nested_get(
fn, unsupported_devices, set.union, function_unsupported_devices
)
return (
unsupported_devices
if isinstance(unsupported_devices, dict)
else tuple(unsupported_devices)
)
# Profiler #
class Profiler(abc.ABC):
"""The profiler class is used to profile the execution of some code.
Parameters
----------
save_dir
The directory to save the profile data to.
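Examples
--------
Concrete subclasses implement `start` and `stop`, and are typically used
as context managers. An illustrative sketch, where ``BackendProfiler``
is a hypothetical concrete subclass and ``my_function`` a stand-in:
>>> with BackendProfiler("tmp_profile_dir"):
...     y = my_function(x)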
"""
def __init__(self, save_dir: str):
self._save_dir = save_dir
@abc.abstractmethod
def start(self):
"""Start the profiler.
This should be called before the code to be profiled.
"""
raise ivy.utils.exceptions.IvyNotImplementedException
@abc.abstractmethod
def stop(self):
"""Stop the profiler.
This should be called after the code to be profiled.
"""
raise ivy.utils.exceptions.IvyNotImplementedException
@abc.abstractmethod
def __enter__(self):
raise ivy.utils.exceptions.IvyNotImplementedException
@abc.abstractmethod
def __exit__(self, exc_type, exc_val, exc_tb):
raise ivy.utils.exceptions.IvyNotImplementedException
| ivy/ivy/functional/ivy/device.py/0 | {
"file_path": "ivy/ivy/functional/ivy/device.py",
"repo_id": "ivy",
"token_count": 15527
} | 47 |
"""Collection of Ivy functions for nested objects."""
# global
from builtins import map as _map
from typing import Callable, Any, Union, List, Tuple, Optional, Dict, Iterable, Sequence
from collections import UserDict, OrderedDict
# local
import ivy
from ivy.utils.exceptions import handle_exceptions
# Extra #
# ------#
@handle_exceptions
def index_nest(
nest: Union[List, Tuple, Dict, ivy.Array, ivy.NativeArray, ivy.Container],
index: Union[List[int], Tuple[int], Iterable[int]],
/,
) -> Any:
"""Index a nested object, using a tuple of indices or keys in the case of
dicts.
Parameters
----------
nest
The nested object to index.
index
A tuple of indices for indexing.
Returns
-------
ret
The result element through indexing the nested object.
Examples
--------
With :code:`Tuple` inputs:
>>> x = (1, 2)
>>> y = [0]
>>> z = ivy.index_nest(x, y)
>>> print(z)
1
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1., 2.],
... [3., 4.]])
>>> y = [1]
>>> z = ivy.index_nest(x, y)
>>> print(z)
ivy.array([3., 4.])
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a = ivy.array([[1.,2.], [3.,4.]]),
... b = (50,60))
>>> y = [1]
>>> z = ivy.index_nest(x, y)
>>> print(z)
{
a: ivy.array([3., 4.]),
b: 60
}
With :code:`Dict` input:
>>> x = {'a': 0, 'b': [1, [2, 3]], 'c': (4, 5)}
>>> y = ('b', 1)
>>> z = ivy.index_nest(x, y)
>>> print(z)
[2, 3]
With :code:`List` inputs:
>>> x = [['a', 'b', 'c'],
... ['d', 'e', 'f'],
... ['g', ['h', 'i']]]
>>> y = iter([2, 1, 0])
>>> z = ivy.index_nest(x, y)
>>> print(z)
h
"""
ret = nest
for i in index:
ret = ret[i]
return ret
@handle_exceptions
def prune_nest_at_index(nest: Iterable, index: Tuple, /) -> None:
"""Prune a nested object at a specified index.
Parameters
----------
nest
The nested object to prune.
index
A tuple of indices for the index at which to prune.
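Examples
--------
A small example on a plain Python list:
>>> nest = [1, [2, 3], 4]
>>> ivy.prune_nest_at_index(nest, (1, 0))
>>> print(nest)
[1, [3], 4]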
"""
if len(index) == 1:
del nest[index[0]]
else:
prune_nest_at_index(nest[index[0]], index[1:])
@handle_exceptions
def set_nest_at_index(
nest: Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple],
index: Sequence[Union[str, int]],
value: Any,
/,
shallow: bool = True,
_result: Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple] = None,
) -> Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple]:
"""Set the value of a nested item at a specified index.
Parameters
----------
nest
The nested object to update.
index
A tuple of indices for the index at which to update.
value
The new value for updating.
shallow
Whether to inplace update the input nest or not
Only works if nest is a mutable type. Default is ``True``.
_result
Placeholder for the result of the update. Do not set this parameter.
Returns
-------
ret
nest with changed value at the given index.
Examples
--------
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> y = (1, 1)
>>> z = 5.
>>> ivy.set_nest_at_index(x, y, z)
>>> print(x)
ivy.array([[1., 2.], [3., 5.]])
>>> x = ivy.array([1., 2., 3., 4.])
>>> y = [1]
>>> z = 5.
>>> ivy.set_nest_at_index(x, y, z)
>>> print(x)
ivy.array([1., 5., 3., 4.])
With :code:`Dict` input:
>>> x = {1 : [1, [2, 3]], 2: (4, 5)}
>>> y = (1, 1)
>>> z = 2
>>> ivy.set_nest_at_index(x, y, z)
>>> print(x)
{1: [1, 2], 2: (4, 5)}
With :code:`List` inputs:
>>> x = [['a', 'b', 'c'],
... ['d', 'e', 'f'],
... ['g', ['h', 'i']]]
>>> y = (2, 1, 0)
>>> z = 'H'
>>> ivy.set_nest_at_index(x, y, z)
>>> print(x)
[['a','b','c'],['d','e','f'],['g',['H','i']]]
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1., 2.]) , b=ivy.array([4., 5.]))
>>> y = ('b',)
>>> z = ivy.array([3., 4.])
>>> ivy.set_nest_at_index(x, y, z)
>>> print(x)
{
a: ivy.array([1., 2.]),
b: ivy.array([3., 4.])
}
"""
is_tuple = isinstance(nest, tuple)
nest_type = type(nest) if is_tuple else lambda x: x
if _result is None:
if shallow:
_result = nest_type(nest)
else:
_result = copy_nest(nest, include_derived=True)
_result = list(_result) if is_tuple else _result
if len(index) == 1:
if shallow:
try:
nest[index[0]] = value
except TypeError:
pass
_result[index[0]] = value
else:
_result[index[0]] = set_nest_at_index(
nest[index[0]], index[1:], value, shallow, _result[index[0]]
)
try:
_result = nest_type(_result)
except TypeError:
_result = nest_type(*_result)
return _result
@handle_exceptions
def insert_into_nest_at_index(nest: Iterable, index: Tuple, value) -> None:
"""Recursively inserts a value into a nested data structure at a specified
index.
This function traverses a nested data structure and inserts the provided `value`
at the specified `index`.
Parameters
----------
nest : Iterable
The nested data structure.
index : Tuple
The index specifying the location where the `value` should be inserted.
value : object
The value to be inserted.
Returns
-------
None
Examples
--------
>>> nest = [[1, 2], [3, 4]]
>>> index = (1, 1)
>>> value = 99
>>> ivy.insert_into_nest_at_index(nest, index, value)
>>> print(nest)
[[1, 2], [3, 99, 4]]
"""
if isinstance(nest, (dict, ivy.Container)):
if len(index) == 1:
key = index[0]
if isinstance(nest, dict):
nest[key] = value
else:
key = index[0]
if key in nest:
insert_into_nest_at_index(nest[key], index[1:], value)
else:
nest[key] = {}
insert_into_nest_at_index(nest[key], index[1:], value)
else:
if len(index) == 1:
idx = index[0]
if isinstance(nest, list):
nest.insert(idx, value)
else:
nest[index[0]] = value
else:
insert_into_nest_at_index(nest[index[0]], index[1:], value)
@handle_exceptions
def map_nest_at_index(
nest: Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List],
index: Sequence[Union[str, int]],
fn: Callable[[Any], Any],
/,
shallow: bool = True,
_result: Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List] = None,
) -> Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple]:
"""Map a function to the value of a nested item at a specified index.
Parameters
----------
nest
The nested object to update.
index
A linear sequence of indices for the index at which to update.
fn
The function to perform on the nested value at the given index.
shallow
Whether to inplace update the input nest or not
Only works if nest is a mutable type. Default is ``True``.
_result
Placeholder for the result of the update. Do not set this parameter.
Returns
-------
ret
nest with fn applied at the given index.
Examples
--------
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> y = (1, 1)
>>> z = lambda a: a + 1.
>>> ivy.map_nest_at_index(x, y, z)
>>> print(x)
ivy.array([[1., 2.], [3., 5.]])
>>> x = ivy.array([1., 2., 3., 4.])
>>> y = [1]
>>> z = lambda a: a + 3.
>>> ivy.map_nest_at_index(x, y, z)
>>> print(x)
ivy.array([1., 5., 3., 4.])
With :code:`Dict` input:
>>> x = {1 : [1, [2, 3]], 2: (4, 5)}
>>> y = (1, 1)
>>> z = lambda _: 2
>>> ivy.map_nest_at_index(x, y, z)
>>> print(x)
{1: [1, 2], 2: (4, 5)}
With :code:`List` inputs:
>>> x = [['a', 'b', 'c'],
... ['d', 'e', 'f'],
... ['g', ['h', 'i']]]
>>> y = (2, 1, 0)
>>> z = lambda a: a + 'H'
>>> ivy.map_nest_at_index(x, y, z)
>>> print(x)
[['a','b','c'],['d','e','f'],['g',['hH','i']]]
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1., 2.]) , b=ivy.array([4., 5.]))
>>> y = ('b',)
>>> z = lambda _: ivy.array([3., 4.])
>>> ivy.map_nest_at_index(x, y, z)
>>> print(x)
{
a: ivy.array([1., 2.]),
b: ivy.array([3., 4.])
}
"""
is_tuple = isinstance(nest, tuple)
nest_type = type(nest) if is_tuple else lambda x: x
if _result is None:
if shallow:
_result = nest_type(nest)
else:
_result = copy_nest(nest, include_derived=True)
_result = list(_result) if is_tuple else _result
if len(index) == 1:
ret = fn(nest[index[0]])
if shallow:
try:
nest[index[0]] = ret
except TypeError:
pass
_result[index[0]] = ret
else:
_result[index[0]] = map_nest_at_index(
nest[index[0]], index[1:], fn, shallow, _result[index[0]]
)
try:
_result = nest_type(_result)
except TypeError:
try:
_result = nest_type(*_result)
except TypeError:
pass
return _result
@handle_exceptions
def multi_index_nest(
nest: Union[List, Dict, Tuple, ivy.Array, ivy.NativeArray, ivy.Container],
indices: Iterable[Iterable[int]],
/,
) -> Iterable[Any]:
"""Repeatedly index a nested object, using a tuple of tuples of indices or
keys in the case of dicts.
Parameters
----------
nest
The nested object to slice.
indices
A tuple of tuples of indices to apply.
Returns
-------
ret
The result elements through indexing the nested object.
Examples
--------
With :code:`Tuple` inputs:
>>> x = (1, 2)
>>> y = [[0]]
>>> z = ivy.multi_index_nest(x, y)
>>> print(z)
[1]
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1., 2.],
... [3., 4.]])
>>> y = [[0],[1]]
>>> z = ivy.multi_index_nest(x, y)
>>> print(z)
[ivy.array([1., 2.]), ivy.array([3., 4.])]
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1,2]),
... b=[30,40])
>>> y = ('a', ('b', 0))
>>> z = ivy.multi_index_nest(x, y)
>>> print(z)
[ivy.array([1, 2]), 30]
With :code:`Dict` input:
>>> x = {'a': 0, 'b': [1, [2, 3]], 'c': (4, 5)}
>>> y = (('b', 1), 'a')
>>> z = ivy.multi_index_nest(x, y)
>>> print(z)
[[2, 3], 0]
With :code:`List` inputs:
>>> x = [['a', 'b', 'c'],
... ['d', 'e', 'f'],
... ['g', ['h', 'i']]]
>>> y = [[2, 1, 0], [0, 1]]
>>> z = ivy.multi_index_nest(x, y)
>>> print(z)
['h', 'b']
"""
return [index_nest(nest, index) for index in indices]
@handle_exceptions
def prune_nest_at_indices(nest: Iterable, indices: Tuple, /) -> None:
"""Prune a nested object at specified indices.
Parameters
----------
nest
The nested object to prune.
indices
A tuple of tuples of indices for the indices at which to prune.
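Examples
--------
A small example on a plain Python list (deeper and larger indices are
pruned first, so the remaining indices stay valid):
>>> nest = [1, [2, 3], 4]
>>> ivy.prune_nest_at_indices(nest, ((2,), (1, 0)))
>>> print(nest)
[1, [3]]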
"""
# Delete first deeper elements and elements with larger index
indices_sorted = sorted(
indices,
key=str,
reverse=True,
)
[prune_nest_at_index(nest, index) for index in indices_sorted]
@handle_exceptions
def set_nest_at_indices(
nest: Union[List, Tuple, Dict, ivy.Array, ivy.NativeArray],
indices: Union[List[int], Tuple[int], Iterable[int]],
values: Union[List[int], Tuple[int], Iterable[int]],
/,
shallow: bool = True,
) -> Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple]:
"""Set the value of a nested item at specified indices with specified
values.
Parameters
----------
nest
The nested object to update.
indices
A tuple of tuples of indices for the indices at which to update.
values
The new values for updating.
shallow
Whether to inplace update the input nest or not
Only works if nest is a mutable type. Default is ``True``.
Returns
-------
ret
nest with updated values at the given indices.
Examples
--------
With :code:`List` inputs:
>>> nest = [[1, 2, 3, 4, 5, 6], ['a', 'b', 'c', 'd', 'e', 'f']]
>>> indices = [[0, 4], [1, 3]]
>>> values = [111, 'x']
>>> ivy.set_nest_at_indices(nest, indices, values)
>>> print(nest)
[[1, 2, 3, 4, 111, 6], ['a', 'b', 'c', 'x', 'e', 'f']]
With :code:`Tuple` inputs:
>>> nest = [['abc', 'xyz', 'pqr'],[1, 4, 'a', 'b']]
>>> indices = ((0, 1),(1, 2))
>>> values = ('ivy', 'x')
>>> ivy.set_nest_at_indices(nest, indices, values)
>>> print(nest)
(['abc', 'ivy', 'pqr'], [1, 4, 'x', 'b'])
With :code:`Dict` input:
>>> nest = {'a': [1., 2., 3.], 'b': [4., 5., 6.], 'c': [0.]}
>>> indices = (('a', 1), ('b', 2), ('c', 0))
>>> values = (11., 22., 33.)
>>> ivy.set_nest_at_indices(nest, indices, values)
>>> print(nest)
{'a': [1.0, 11.0, 3.0], 'b': [4.0, 5.0, 22.0], 'c': [33.0]}
With :class:`ivy.Array` inputs:
>>> nest = ivy.array([[1., 2., 3.],[4., 5., 6.]])
>>> indices = ((0, 1),(1, 2))
>>> values = (11., 22.)
>>> ivy.set_nest_at_indices(nest, indices, values)
>>> print(nest)
ivy.array([[1., 11., 3.], [4., 5., 22.]])
"""
is_tuple = isinstance(nest, tuple)
nest_type = type(nest) if is_tuple else lambda x: x
if shallow:
result = nest_type(nest)
else:
result = copy_nest(nest, include_derived=True)
result = list(result) if is_tuple else result
if not isinstance(values, (list, tuple)):
values = [values] * len(indices)
for index, value in zip(indices, values):
result = set_nest_at_index(nest, index, value, _result=result, shallow=shallow)
try:
result = nest_type(result)
except TypeError:
result = nest_type(*result)
return result
@handle_exceptions
def insert_into_nest_at_indices(nest: Iterable, indices: Tuple, values, /) -> None:
"""Insert a value into the nested item at specified indices with specified
values.
Parameters
----------
nest
The nested object to insert into.
indices
A tuple of tuples of indices for the indices at which to insert
values.
values
The new values for inserting.
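Examples
--------
A small example on nested lists:
>>> nest = [[1, 2], [3, 4]]
>>> ivy.insert_into_nest_at_indices(nest, ((0, 1), (1, 0)), (10, 30))
>>> print(nest)
[[1, 10, 2], [30, 3, 4]]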
"""
if not isinstance(values, (list, tuple)):
values = [values] * len(indices)
[
insert_into_nest_at_index(nest, index, value)
for index, value in zip(indices, values)
]
@handle_exceptions
def map_nest_at_indices(
nest: Iterable,
indices: Tuple,
fn: Callable,
/,
shallow: bool = True,
) -> Union[ivy.Array, ivy.NativeArray, ivy.Container, Dict, List, Tuple]:
"""Map a function to the values of a nested item at the specified indices.
Parameters
----------
nest
The nested object to update.
indices
A tuple of tuples of indices for the indices at which to update.
fn
The function to perform on the nest at the given index.
shallow
Whether to inplace update the input nest or not
Only works if nest is a mutable type. Default is ``True``.
Returns
-------
ret
nest with fn applied at the given indices.
Examples
--------
With :code:`List` inputs:
>>> nest = [['a', 'c', 'e', 'd', 'u', 'k'], ['m', 'n', 'f', 'p', 'q', 't']]
>>> indices = [[0, 4], [1, 5]]
>>> function = lambda x : x + 'b'
>>> ivy.map_nest_at_indices(nest, indices, function)
>>> print(nest)
[['a', 'c', 'e', 'd', 'ub', 'k'], ['m', 'n', 'f', 'p', 'q', 'tb']]
With :code:`Tuple` inputs:
>>> nest = ([-9, 8, -27],[9, -4, -5, 7])
>>> indices = ((0, 2),(1, 0),(1, 2))
>>> function = abs
>>> ivy.map_nest_at_indices(nest, indices, function)
>>> print(nest)
([-9, 8, 27], [9, -4, 5, 7])
With :code:`Dict` input:
>>> nest = {'a': [8., 16., 22.], 'b': [10., 44., 81.], 'c': [9., 75., 37.]}
>>> indices = (('a', 2), ('b', 0), ('c', 1))
>>> function = lambda x : x + 1
>>> ivy.map_nest_at_indices(nest, indices, function)
>>> print(nest)
{'a': [8.0, 16.0, 23.0], 'b': [11.0, 44.0, 81.0], 'c': [9.0, 76.0, 37.0]}
With :class:`ivy.Array` inputs:
>>> nest = ivy.array([[-9., 8., -17.],[11., -3., 5.]])
>>> indices = ((0, 1),(1, 1),(1, 2))
>>> function = lambda x : x ** 2
>>> ivy.map_nest_at_indices(nest, indices, function)
>>> print(nest)
ivy.array([[ -9., 64., -17.],
[ 11., 9., 25.]])
"""
is_tuple = isinstance(nest, tuple)
nest_type = type(nest) if is_tuple else lambda x: x
if shallow:
result = nest_type(nest)
else:
result = copy_nest(nest, include_derived=True)
result = list(result) if is_tuple else result
for i, index in enumerate(indices):
result = map_nest_at_index(nest, index, fn, _result=result, shallow=shallow)
try:
result = nest_type(result)
except TypeError:
result = nest_type(*result)
return result
@handle_exceptions
def nested_argwhere(
nest: Iterable,
fn: Callable,
check_nests: bool = False,
to_ignore: Optional[Union[type, Tuple[type]]] = None,
_index: Optional[List] = None,
_base: bool = True,
stop_after_n_found: Optional[int] = None,
) -> Union[Iterable, bool]:
"""Check the leaf nodes of nested x via function fn, and returns all nest
indices where the method evaluates as True.
Parameters
----------
nest
The nest to check the leaves of.
fn
The condition function, returning True or False.
check_nests
Whether to also check the nests for the condition, not only nest leaves.
Default is ``False``.
to_ignore
Types to ignore when deciding whether to go deeper into the nest or not
_index
The indices detected so far. None at the beginning. Used internally, do not set
manually.
_base
Whether the current function call is the first function call in the recursive
stack. Used internally, do not set manually.
stop_after_n_found
to stop after some needed indices are found.
Returns
-------
ret
A set of indices for the nest where the function evaluated as True.
Examples
--------
With :code:`List` input:
>>> nest = [[[1, -2, 3], 19], [[9, -36, 80], -10.19]]
>>> fun = ivy.abs
>>> nested_indices = ivy.nested_argwhere(nest, fn=fun)
>>> print(nested_indices)
[
[0, 0, 0], [0, 0, 1],
[0, 0, 2], [0, 1],
[1, 0, 0], [1, 0, 1],
[1, 0, 2], [1, 1]
]
With :code:`Tuple` input:
>>> nest = ([-5, 9, 2], [0.3, 4.])
>>> fun = ivy.abs
>>> nested_indices = ivy.nested_argwhere(nest, fn=fun, stop_after_n_found=4)
>>> print(nested_indices)
[[0, 0], [0, 1], [0, 2], [1, 0]]
With :code:`Dict` input:
>>> nest={'a': [2., 0.6, -2.], 'b': [1., 4., 1.9], 'c': [9.4]}
>>> fun = ivy.abs
>>> nested_indices = ivy.nested_argwhere(nest, fn=fun)
>>> print(nested_indices)
[
['a', 0], ['a', 1],
['a', 2], ['b', 0],
['b', 1], ['b', 2],
['c', 0]
]
"""
to_ignore = ivy.default(to_ignore, ())
_index = [] if _index is None else _index
if isinstance(nest, (tuple, list)) and not isinstance(nest, to_ignore):
n = 0
_indices = []
for i, item in enumerate(nest):
ind = (
nested_argwhere(
item,
fn,
check_nests,
to_ignore,
_index + [i],
False,
stop_after_n_found - n,
)
if stop_after_n_found is not None
else nested_argwhere(
item,
fn,
check_nests,
to_ignore,
_index + [i],
False,
None,
)
)
if stop_after_n_found is not None and ind:
if n >= stop_after_n_found:
break
n += len(ind)
_indices += [ind]
if stop_after_n_found is not None and n >= stop_after_n_found:
break
_indices = [idx for idxs in _indices if idxs for idx in idxs]
if check_nests and fn(nest):
_indices.append(_index)
elif (isinstance(nest, (dict, UserDict))) and not isinstance(nest, to_ignore):
n = 0
_indices = []
for k, v in nest.items():
ind = (
nested_argwhere(
v,
fn,
check_nests,
to_ignore,
_index + [k],
False,
stop_after_n_found - n,
)
if stop_after_n_found is not None
else nested_argwhere(
v,
fn,
check_nests,
to_ignore,
_index + [k],
False,
None,
)
)
if stop_after_n_found is not None and ind:
if n >= stop_after_n_found:
break
n += len(ind)
_indices += [ind]
_indices = [idx for idxs in _indices if idxs for idx in idxs]
if check_nests and fn(nest):
_indices.append(_index)
else:
cond_met = fn(nest)
if cond_met:
return [_index]
return False
return [index for index in _indices if index]
@handle_exceptions
def all_nested_indices(
nest: Union[List, Tuple, Dict, ivy.Array, ivy.NativeArray, ivy.Container] = None,
/,
include_nests: bool = False,
_index: Optional[Union[int, Sequence[int]]] = None,
_base: bool = True,
) -> List:
"""Return indices of all the elements in nest.
Parameters
----------
nest
The nest to check the leaves of.
include_nests
Whether to also include indices of the nests themselves, not only
leaves. Default is ``False``.
_index
The indices detected so far. None at the beginning. Used internally,
do not set manually.
_base
Whether the current function call is the first function call in the
recursive stack. Used internally, do not set manually.
Returns
-------
ret
A set of indices of all elements in nest
Examples
--------
With :code:`List` input:
>>> x = [189, [863, 672], [264, 384]]
>>> y = ivy.all_nested_indices(x)
>>> print(y)
[[0], [1, 0], [1, 1], [2, 0], [2, 1]]
With :code:`Tuple` input:
>>> x = (189, (863, 672), (264, 384))
>>> y = ivy.all_nested_indices(x, include_nests=True)
>>> print(y)
[[0], [1, 0], [1, 1], [1], [2, 0], [2, 1], [2]]
With :code:`Dict` input:
>>> x = {'a': 2., 'b': [6., [15., 9.]], 'c': (7., 56.)}
>>> y = ivy.all_nested_indices(x)
>>> print(y)
[['a'], ['b', 0], ['b', 1, 0], ['b', 1, 1], ['c', 0], ['c', 1]]
With :class:`ivy.Array` input:
>>> x = ivy.array([[True, False], [False, False]])
>>> y = ivy.all_nested_indices(x)
>>> print(y)
[[]]
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([412, 948, 482]), b=ivy.array([168, 674, 341]))
>>> y = ivy.all_nested_indices(x)
>>> print(y)
[['a'], ['b']]
"""
_index = [] if _index is None else _index
if isinstance(nest, (tuple, list)):
_indices = [
all_nested_indices(
item,
include_nests,
_index + [i],
False,
)
for i, item in enumerate(nest)
]
_indices = [idx for idxs in _indices if idxs for idx in idxs]
if include_nests:
_indices.append(_index)
elif isinstance(nest, dict):
_indices = [
all_nested_indices(v, include_nests, _index + [k], False)
for k, v in nest.items()
]
_indices = [idx for idxs in _indices if idxs for idx in idxs]
if include_nests:
_indices.append(_index)
else:
return [_index]
return [index for index in _indices if index]
# noinspection PyShadowingBuiltins
@handle_exceptions
def map(
fn: Callable,
constant: Optional[Dict[str, Any]] = None,
unique: Optional[Dict[str, Iterable[Any]]] = None,
mean: bool = False,
) -> List:
"""Apply a function on each item of an iterable x.
Parameters
----------
fn
The function to map onto x.
constant
keyword arguments which remain constant between each function call.
Default is ``None``.
unique
keyword arguments which are unique for each function call. Default is ``None``.
mean
Whether to compute the mean across the return values, and return this mean.
Default is ``False``.
Returns
-------
ret
x following the application of fn to each of its iterated items.
Examples
--------
With :code:`int` inputs:
>>> import numpy as np
>>> def special_square(x: float) -> float: return np.square(x)
>>> results = ivy.map(fn = special_square,
... constant = None,
... unique = {'x' : [1,2,3]},
... mean = False)
>>> print(results)
[1, 4, 9]
>>> results = ivy.map(fn = special_square,
... constant = None,
... unique = {'x':[0,1,2]},
... mean = True)
>>> print(results)
1.6666666666666667
>>> def special_pow(x:float,y:float) ->float : return np.power(x,y)
>>> results = ivy.map(fn = special_pow,
... constant = {'y':[0,1]},
... unique = {'x':[1,2,3]},
... mean = False)
>>> print(results)
[array([1,1]),
array([1,2]),
array([1,3])]
>>> results = ivy.map(fn = special_pow,
... constant = {'y':[0,1]},
... unique = {'x':[1,2,3]},
... mean = True)
>>> print(results)
[1. 2.]
With float inputs:
>>> def linear_model(w:float, x:float, b:float) -> float: return w*x + b
>>> results = ivy.map(fn = linear_model,
... constant = {'w':10., 'b':1.},
... unique = {'x':[0.,1.,2.]},
... mean = False)
>>> print(results)
[1.0, 11.0, 21.0]
With :class:`ivy.Array` inputs:
>>> results = ivy.map(fn = linear_model,
... constant = {'w':ivy.array([1.,0.,1.]), 'b':ivy.array([0.,10.,100.])},
... unique = {'x':[ivy.array([0.,1.,0.]), ivy.array([1.,1.,1.])]},
... mean = False)
>>> print(results)
[ivy.array([0., 10., 100.]),
ivy.array([1., 10., 101.])]
>>> results = ivy.map(fn = linear_model,
... constant = {'w':ivy.array([1.,0.,1.]), 'b':ivy.array([0.,10.,100.])},
... unique = {'x':[ivy.array([0.,1.,0.]), ivy.array([1.,1.,1.])]},
... mean = True)
>>> print(results)
ivy.array([0.5, 10., 100.5])
"""
c = ivy.default(constant, {})
u = ivy.default(unique, {})
rets = [
r
for r in _map(
lambda *uv: fn(**dict(**c, **dict(zip(u.keys(), uv)))), *u.values()
)
]
if mean:
rets = sum(rets) / len(rets)
return rets
@handle_exceptions
def nested_map(
fn: Callable,
x: Union[ivy.Array, ivy.NativeArray, Iterable],
/,
include_derived: Optional[Union[Dict[str, bool], bool]] = None,
to_ignore: Optional[Union[type, Tuple[type]]] = None,
to_mutable: bool = False,
_tuple_check_fn: Optional[Callable] = None,
_list_check_fn: Optional[Callable] = None,
_dict_check_fn: Optional[Callable] = None,
shallow: bool = True,
) -> Union[ivy.Array, ivy.NativeArray, Iterable, Dict]:
"""Apply a function on x in a nested manner, whereby all dicts, lists and
tuples are traversed to their lowest leaves before applying the method and
returning x. If x is not nested, the method is applied to x directly.
Parameters
----------
fn
The function to map onto x.
x
The item to apply the mapped function to.
include_derived
Whether to also recurse for classes derived from tuple, list and dict.
Default is ``False``.
to_ignore
Types to ignore when deciding whether to go deeper into the nest or not
to_mutable
Whether to convert the nest to a mutable form, changing all tuples to lists.
Default is ``False``.
_tuple_check_fn
Placeholder for the tuple check function, do not set this parameter.
_list_check_fn
Placeholder for the list check function, do not set this parameter.
_dict_check_fn
Placeholder for the dict check function, do not set this parameter.
shallow
Whether to inplace update the input nest or not
Only works if nest is a mutable type. Default is ``True``.
Returns
-------
ret
x following the application of fn to its nested leaves, or x itself if x is not
nested.
Examples
--------
With :class:`Tuple` inputs:
>>> x = ([[1., 2.], [3., 4.]])
>>> function = lambda a : a * 2
>>> ivy.nested_map(function, x)
[[2.0, 4.0], [6.0, 8.0]]
>>> print(x)
[[2.0, 4.0], [6.0, 8.0]]
With :code:`Dict` input:
>>> x = {1 : [1, [2, 3]], 2: (4, 5)}
>>> function = lambda a : a + 1
>>> ivy.nested_map(function, x)
{1 : [2, [3, 4]], 2: (5, 6)}
>>> print(x)
{1 : [2, [3, 4]], 2: (5, 6)}
With :code:`List` inputs:
>>> x = [['a', 'b', 'c'],
... ['d', 'e', 'f'],
... ['g', ['h', 'i']]]
>>> function = lambda a: a + 'H'
>>> ivy.nested_map(function, x)
[['aH','bH','cH'],['dH','eH','fH'],['gH',['hH','iH']]]
>>> print(x)
[['aH','bH','cH'],['dH','eH','fH'],['gH',['hH','iH']]]
With :class:`ivy.Container` input:
>>> x = ivy.Container(
... a=ivy.array([[1, 2, 3], [9, 8, 7]]) , b=ivy.array([[4, 5, 6], [12, 13, 14]])
... )
>>> function = lambda a : a + 1
>>> ivy.nested_map(function, x)
>>> print(x)
{
a: ivy.array([[1, 2, 3],
[9, 8, 7]]),
b: ivy.array([[4, 5, 6],
[12, 13, 14]])
}
>>> nest = ([1, 2], [3, 4], [5, 6], {"a": 1, "b": 2, "c": 3})
>>> function = lambda a : a * 2
>>> ivy.nested_map(function, nest, to_ignore=list)
([1, 2, 1, 2], [3, 4, 3, 4], [5, 6, 5, 6], {'a': 2, 'b': 4, 'c': 6})
>>> nest = ([23, 25, 1337], [63, 98, 6])
>>> function = lambda a : a + 1
>>> ivy.nested_map(function, nest, to_mutable=True)
[[24, 25, 1338], [64, 99, 7]]
"""
to_ignore = ivy.default(to_ignore, ())
if include_derived is True:
include_derived = {"tuple": True, "list": True, "dict": True}
elif not include_derived:
include_derived = {}
for t in ("tuple", "list", "dict"):
if t not in include_derived:
include_derived[t] = False
class_instance = type(x)
# TODO: Fixes iterating over tracked instances from the graph
# during transpilation. However, there might be a better fix
# than this. Remove the check below if that's the case
if (
hasattr(x, "is_tracked_proxy")
and hasattr(class_instance, "__bases__")
and not set(class_instance.__bases__).intersection(set(to_ignore))
):
to_ignore += (class_instance,)
tuple_check_fn = ivy.default(
_tuple_check_fn,
(
(lambda x_, t_: isinstance(x_, t_))
if include_derived["tuple"]
else (lambda x_, t_: type(x_) is t_)
),
)
list_check_fn = ivy.default(
_list_check_fn,
(
(lambda x_, t_: isinstance(x_, t_))
if include_derived["list"]
else (lambda x_, t_: type(x_) is t_)
),
)
dict_check_fn = ivy.default(
_dict_check_fn,
(
(lambda x_, t_: isinstance(x_, t_))
if include_derived["dict"]
else (lambda x_, t_: type(x_) is t_)
),
)
if tuple_check_fn(x, tuple) and not isinstance(x, to_ignore):
ret_list = [
nested_map(
fn,
i,
include_derived,
to_ignore,
to_mutable,
tuple_check_fn,
list_check_fn,
dict_check_fn,
shallow,
)
for i in x
]
if to_mutable:
return ret_list
elif hasattr(x, "_fields"):
# noinspection PyProtectedMember
return class_instance(**dict(zip(x._fields, ret_list)))
else:
return class_instance(ret_list)
elif list_check_fn(x, list) and not isinstance(x, to_ignore):
ret_list = [
nested_map(
fn,
i,
include_derived,
to_ignore,
to_mutable,
tuple_check_fn,
list_check_fn,
dict_check_fn,
shallow,
)
for i in x
]
if shallow:
x[:] = ret_list[:]
return x
return class_instance(ret_list)
elif (dict_check_fn(x, dict) or isinstance(x, UserDict)) and not isinstance(
x, to_ignore
):
class_instance = type(x)
ret = {
k: nested_map(
fn,
v,
include_derived,
to_ignore,
to_mutable,
tuple_check_fn,
list_check_fn,
dict_check_fn,
shallow,
)
for k, v in x.items()
}
if shallow:
x.update(ret)
return x
return class_instance(ret)
elif isinstance(x, slice):
# TODO: add tests for this
return slice(*nested_map(fn, [x.start, x.stop, x.step]))
return fn(x)
@handle_exceptions
def nested_any(
nest: Iterable,
fn: Callable,
check_nests: bool = False,
_base: bool = True,
) -> bool:
"""Check the leaf nodes of nest x via function fn, and returns True if any
evaluate to True, else False.
Parameters
----------
nest
The nest to check the leaves of.
fn
The condition function, returning True or False.
check_nests
Whether to also check the nests for the condition, not only nest leaves.
Default is ``False``.
_base
Whether the current function call is the first function call in the recursive
stack. Used internally, do not set manually.
Returns
-------
ret
A boolean, whether the function evaluates to true for any leaf node.
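Examples
--------
A small example on a mixed nest:
>>> x = [1, [2, -3], {'a': 4}]
>>> ivy.nested_any(x, lambda v: v < 0)
True
>>> ivy.nested_any(x, lambda v: v > 9)
False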
"""
if isinstance(nest, (tuple, list)):
for item in nest:
if nested_any(item, fn, check_nests, False):
return True
if check_nests and fn(nest):
return True
elif isinstance(nest, dict):
for v in nest.values():
if nested_any(v, fn, check_nests, False):
return True
if check_nests and fn(nest):
return True
elif fn(nest):
return True
return False
@handle_exceptions
def copy_nest(
nest: Union[ivy.Array, ivy.NativeArray, Iterable],
/,
include_derived: bool = False,
to_mutable: bool = False,
) -> Union[ivy.Array, ivy.NativeArray, Iterable]:
"""Copy a nest deeply, but without copying leaves of the nest, only the
nest lists, tuples and dicts are copied.
Parameters
----------
nest
The nest to copy.
include_derived
Whether to also recurse for classes derived from tuple, list and dict.
Default is ``False``.
to_mutable
Whether to convert the nest to a mutable form, changing all tuples to lists.
Default is ``False``.
Returns
-------
ret
The copied nest.
Examples
--------
With :class:`ivy.Array` input:
>>> nest = ivy.array([[1.,2.,3.],[7.,8.,9.]])
>>> copied_nest = ivy.copy_nest(nest)
>>> print(copied_nest)
ivy.array([[1., 2., 3.],
[7., 8., 9.]])
With :code:`Iterable` input:
>>> nest = [[1, 2, 3, 4, 5], [23, 24, 25, 26, 27]]
>>> copied_nest = ivy.copy_nest(nest, include_derived = True)
>>> print(copied_nest)
[[1, 2, 3, 4, 5], [23, 24, 25, 26, 27]]
>>> nest = ([23, 25, 1337], [63, 98, 6])
>>> copied_nest = ivy.copy_nest(nest, to_mutable = True)
>>> print(copied_nest)
[[23, 25, 1337], [63, 98, 6]]
>>> nest = {'first': [23., 24., 25], 'second': [46., 48., 50]}
>>> copied_nest = ivy.copy_nest(nest)
>>> print(copied_nest)
{'first': [23.0, 24.0, 25], 'second': [46.0, 48.0, 50]}
"""
class_instance = type(nest)
check_fn = (
(lambda x_, t: isinstance(x_, t))
if include_derived
else (lambda x_, t: type(x_) is t)
)
if check_fn(nest, tuple):
ret_list = [
copy_nest(
i,
include_derived=include_derived,
to_mutable=to_mutable,
)
for i in nest
]
if to_mutable:
return ret_list
if hasattr(nest, "_fields"):
return class_instance(**dict(zip(nest._fields, ret_list)))
return class_instance(tuple(ret_list))
elif check_fn(nest, list):
return class_instance(
[
copy_nest(
i,
include_derived=include_derived,
to_mutable=to_mutable,
)
for i in nest
]
)
elif check_fn(nest, dict):
class_instance = type(nest)
dict_ = {
k: copy_nest(
v,
include_derived=include_derived,
to_mutable=to_mutable,
)
for k, v in nest.items()
}
if isinstance(nest, OrderedDict):
return class_instance(**dict_)
return class_instance(dict_)
return nest
@handle_exceptions
def nested_multi_map(
func: Callable,
nests: List[Iterable],
index_chains=None,
to_apply=True,
prune_unapplied=False,
index_chain="",
config=None,
to_ivy=True,
):
"""Apply function to all array values from a collection of identically
structured ivy arrays.
Parameters
----------
func
Function to apply to each nest entry.
nests
The nests to map over.
index_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to index_chains, otherwise index_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune index_chains for which the function was not applied,
otherwise the leftmost nest value is used. Default is ``False``.
index_chain
Chain of keys for this dict entry (Default value = '')
config
The configuration for the nests. Default is the same as nest0.
to_ivy
convert the output to ivy_arrays. Default is ``True``
Returns
-------
ret
nest containing the result of the function. The structure of the output is the
same as the input, with the result of the function applied to each applicable
leaf, and the value at that leaf in the first nest for a non-applicable leaf if
prune_unapplied is False; otherwise unapplied leaves are pruned.
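Examples
--------
A minimal illustrative example, combining two identically structured
lists (with ``to_ivy=False`` the raw function outputs are returned):
>>> ivy.nested_multi_map(lambda xs, _: xs[0] + xs[1], [[1, 2], [10, 20]], to_ivy=False)
[11, 22]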
"""
nest0 = None
for nest in nests:
if isinstance(nest, (tuple, list, dict)):
nest0 = nest
break
if isinstance(nest0, (list, tuple)):
return_nest = []
elif isinstance(nest0, dict):
return_nest = {}
else:
return_nest = None
if nest0 is not None:
is_dict = isinstance(nest0, dict)
for index, val in enumerate(nest0):
if is_dict:
values = [
(
nest[index]
if isinstance(nest, (tuple, list))
else nest[val] if isinstance(nest, dict) else nest
)
for nest in nests
]
else:
values = [
(
nest[index]
if isinstance(nest, (tuple, list))
else nest[list(nest)[index]] if isinstance(nest, dict) else nest
)
for nest in nests
]
value0 = values[0]
if is_dict:
key = str(index) if isinstance(nest, (tuple, list)) else val
else:
key = (
str(index) if isinstance(nest, (tuple, list)) else list(nest)[index]
)
this_index_chain = key if index_chain == "" else f"{index_chain}/{key}"
ret = ivy.nested_multi_map(
func,
values,
index_chains,
to_apply,
prune_unapplied,
this_index_chain,
config,
to_ivy,
)
if ret is not None:
if to_ivy and isinstance(nest, (ivy.Array, ivy.NativeArray)):
ret = ivy.array(ivy.to_list(ret))
(
return_nest.append(ret)
if isinstance(return_nest, (list))
else return_nest.update(
{val if is_dict else list(nest)[index]: ret}
)
)
else:
values = nests
value0 = values[0]
this_index_chain = index_chain
def _found_in_index_chains(this_index_chain, index_chains):
if index_chains is None:
return False
for index_chain in index_chains:
if this_index_chain.startswith(index_chain):
return True
return False
if index_chains is not None:
found = _found_in_index_chains(this_index_chain, index_chains)
if (found and not to_apply) or (not found and to_apply):
if prune_unapplied:
return return_nest
if ivy.is_array(value0):
if not to_ivy:
value0 = ivy.array(value0)
(
return_nest.append(value0)
if isinstance(return_nest, list)
else (
return_nest.update({this_index_chain: value0})
if isinstance(return_nest, dict)
else return_nest
)
)
return (
tuple(return_nest)
if isinstance(nest, tuple)
else (
ivy.Container(return_nest)
if ivy.is_ivy_container(nest)
else return_nest
)
)
ret = func(values, this_index_chain)
if to_ivy and ret is not None:
if isinstance(nest, (ivy.Array, ivy.NativeArray)):
return ret
else:
return ivy.array(ret)
else:
return ret
if prune_unapplied and len(return_nest) == 0:
return None
return (
tuple(return_nest)
if isinstance(nest0, tuple)
else ivy.Container(return_nest) if ivy.is_ivy_container(nest0) else return_nest
)
@handle_exceptions
def duplicate_array_index_chains(nest: Union[ivy.Array, ivy.NativeArray, Iterable]):
"""Group all unique index chains in a nest. This function is useful for
finding all unique index chains in a nest, and then duplicating the values
at those index chains for functional frameworks.
Parameters
----------
nest
nest to get duplicate index chains for.
Returns
-------
list of index chains to duplicate.
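Examples
--------
Index chains which point to the same underlying array are grouped
together:
>>> a = ivy.array([1.])
>>> nest = [a, {'k': a}, ivy.array([2.])]
>>> ivy.duplicate_array_index_chains(nest)
[[[0], [1, 'k']], [[2]]]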
"""
all_index_chains = ivy.nested_argwhere(nest, lambda _: True)
duplicates = []
duplicate_index_chains = {}
for index_chain in all_index_chains:
val = ivy.index_nest(nest, index_chain)
if ivy.is_array(val):
for i in range(len(duplicates)):
if val is duplicates[i]:
duplicate_index_chains[i].append(index_chain)
break
else:
duplicates.append(val)
duplicate_index_chains[len(duplicates) - 1] = [index_chain]
return list(duplicate_index_chains.values())
def prune_empty(nest):
"""Prune empty nests from a nest.
Parameters
----------
nest
nest to prune.
Returns
-------
pruned nest with all empty nests removed
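Examples
--------
Empty sub-nests and ``None`` leaves are removed:
>>> nest = {'a': [], 'b': [1, {}], 'c': None}
>>> ivy.prune_empty(nest)
{'b': [1]}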
"""
valid = False
if isinstance(nest, dict):
keys = list(nest)
for k in keys:
nest[k] = prune_empty(nest[k])
if nest[k] is not None:
valid = True
for k in keys:
if nest[k] is None:
del nest[k]
elif isinstance(nest, (list, tuple)):
nest = list(nest)
for i in range(len(nest)):
nest[i] = prune_empty(nest[i])
if nest[i] is not None:
valid = True
for i in range(len(nest) - 1, -1, -1):
if nest[i] is None:
del nest[i]
if not valid and not ivy.is_array(nest) and not isinstance(nest, (int, float, str)):
return None
return nest
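# A quick sketch (illustrative values): empty sub-nests are removed
# recursively, while arrays, numbers and strings are kept.
#   >>> prune_empty({"a": [], "b": {"c": ivy.array([1])}, "d": {}})
#   {'b': {'c': ivy.array([1])}}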
| ivy/ivy/functional/ivy/nest.py/0 | {
"file_path": "ivy/ivy/functional/ivy/nest.py",
"repo_id": "ivy",
"token_count": 22927
} | 48 |
"""Collection of Ivy normalization classes."""
# local
import ivy
from ivy.stateful.module import Module
from ivy.stateful.initializers import Zeros, Ones
class LayerNorm(Module):
def __init__(
self,
normalized_shape,
/,
*,
eps: float = 1e-05,
elementwise_affine: bool = True,
new_std: float = 1.0,
device=None,
v=None,
dtype=None,
):
"""Class for applying Layer Normalization over a mini-batch of inputs.
Parameters
----------
        normalized_shape
            Trailing shape over which to apply the normalization.
        eps
            small constant added to the denominator for numerical
            stability. Default is ``1e-05``.
elementwise_affine
Whether to include learnable affine parameters, default is ``True``.
new_std
The standard deviation of the new normalized values. Default is 1.
device
device on which to create the layer's variables 'cuda:0', 'cuda:1', 'cpu'
etc. (Default value = None)
v
the variables for each submodule in the sequence,
constructed internally by default.
"""
if isinstance(normalized_shape, int):
normalized_shape = (normalized_shape,)
self._normalized_idxs = [-(i + 1) for i in range(len(normalized_shape))]
self._epsilon = eps
self._elementwise_affine = elementwise_affine
self._new_std = new_std
self._weight_shape = normalized_shape
self._bias_shape = normalized_shape
self._weight_init = Ones()
self._bias_init = Zeros()
Module.__init__(self, device=device, v=v, dtype=dtype)
def _create_variables(self, device=None, dtype=None):
"""Create internal variables for the layer."""
device = ivy.default(device, self.device)
dtype = ivy.default(dtype, self.dtype)
if self._elementwise_affine:
return {
"weight": self._weight_init.create_variables(
self._weight_shape, device, dtype=dtype
),
"bias": self._bias_init.create_variables(
self._bias_shape, device, dtype=dtype
),
}
return {}
def _forward(self, inputs):
"""Perform forward pass of the LayerNorm layer.
Parameters
----------
inputs
Inputs to process.
Returns
-------
ret
The outputs following the layer normalization operation.
"""
return ivy.layer_norm(
inputs,
self._normalized_idxs,
eps=self._epsilon,
scale=self.v.weight if self._elementwise_affine else None,
offset=self.v.bias if self._elementwise_affine else None,
new_std=self._new_std,
)
def _extra_repr(self) -> str:
return (
f"normalized_idxs={self._normalized_idxs}, epsilon={self._epsilon}, "
f"elementwise_affine={self._elementwise_affine}, new_std={self._new_std}"
)
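# A minimal usage sketch (shapes and values are illustrative, and assume a
# backend such as NumPy has been set):
#   >>> import ivy
#   >>> ivy.set_backend("numpy")
#   >>> layer = LayerNorm(4)
#   >>> x = ivy.random_uniform(shape=(2, 4))
#   >>> y = layer(x)  # same shape as x, normalized over the trailing axis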
class BatchNorm2D(Module):
def __init__(
self,
num_features,
/,
*,
eps: float = 1e-5,
momentum: float = 0.1,
data_format: str = "NSC",
affine: bool = True,
track_running_stats: bool = True,
device=None,
v=None,
dtype=None,
training=True,
):
"""Class for applying Layer Normalization over a mini-batch of inputs.
Parameters
----------
        num_features
            Number of features (channels) of the input.
        eps
            small constant added to the denominator for numerical
            stability. Default is ``1e-05``.
data_format
The ordering of the dimensions in the input, one of "NSC" or "NCS",
where N is the batch dimension, S represents any number of spatial
dimensions and C is the channel dimension. Default is "NSC".
affine
Whether to include learnable affine parameters, default is ``True``.
track_running_stats
is a boolean flag that determines whether
the running statistics should be updated
during training in batch normalization.
momentum
The value used for the running_mean and running_var computation.
Default is 0.1.
device
device on which to create the layer's variables 'cuda:0', 'cuda:1', 'cpu'
etc. (Default value = None)
v
the variables for each submodule in the sequence,
constructed internally by default.
training
            If True, compute and use the batch mean and variance of `x`.
            Otherwise, use the stored running `mean` and `variance`.
"""
self.num_features = num_features
self._affine = affine
self.data_format = data_format
self._epsilon = eps
self._momentum = momentum
self._track_running_stats = track_running_stats
self._weight_shape = num_features
self._bias_shape = num_features
self._running_mean_shape = num_features
self._running_var_shape = num_features
self._weight_init = Ones()
self._bias_init = Zeros()
self._running_mean_init = Zeros()
self._running_var_init = Ones()
Module.__init__(self, device=device, v=v, dtype=dtype, training=training)
def _create_variables(self, device=None, dtype=None):
"""Create internal variables for the layer."""
device = ivy.default(device, self.device)
dtype = ivy.default(dtype, self.dtype)
if self._affine:
return {
"b": self._bias_init.create_variables(
self._bias_shape, device, dtype=dtype
),
"running_mean": self._running_mean_init.create_variables(
self._running_mean_shape, device, dtype=dtype
),
"running_var": self._running_var_init.create_variables(
self._running_var_shape, device, dtype=dtype
),
"w": self._weight_init.create_variables(
self._weight_shape, device, dtype=dtype
),
}
return {}
def _forward(self, inputs):
"""Perform forward pass of the BatchNorm layer.
Parameters
----------
inputs
            Inputs to process, of shape (N, *S, C) for "NSC" data format
            or (N, C, *S) for "NCS".
Returns
-------
ret
The outputs following the batch normalization operation.
"""
normalized, running_mean, running_var = ivy.batch_norm(
inputs,
self.v.running_mean,
self.v.running_var,
eps=self._epsilon,
momentum=self._momentum,
data_format=self.data_format,
training=self.training,
scale=self.v.w if self._affine else None,
offset=self.v.b if self._affine else None,
)
if self._track_running_stats and self.training:
self.v.running_mean = running_mean
self.v.running_var = running_var
return normalized
    def _extra_repr(self) -> str:
        return (
            f"num_features={self.num_features}, affine={self._affine}, "
            f"data_format={self.data_format}, epsilon={self._epsilon}, "
            f"momentum={self._momentum}, "
            f"track_running_stats={self._track_running_stats}"
        )
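# A rough usage sketch (channel-last "NSC" layout is the default here):
#   >>> bn = BatchNorm2D(3)  # 3 channels
#   >>> x = ivy.random_uniform(shape=(8, 32, 32, 3))
#   >>> y = bn(x)  # normalized over batch and spatial dims, per channel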
| ivy/ivy/stateful/norms.py/0 | {
"file_path": "ivy/ivy/stateful/norms.py",
"repo_id": "ivy",
"token_count": 3584
} | 49 |
import logging
logging_modes = ["DEBUG", "INFO", "WARNING", "ERROR"]
# Set up the initial logging mode
logging.basicConfig(level=logging.WARNING)
logging_mode_stack = [logging.WARNING]
def set_logging_mode(mode):
"""Set the current logging mode for Ivy.
Possible modes are 'DEBUG', 'INFO', 'WARNING', 'ERROR'.
"""
assert mode in logging_modes, "Invalid logging mode. Choose from: " + ", ".join(
logging_modes
)
# Update the logging level
logging.getLogger().setLevel(mode)
logging_mode_stack.append(mode)
def unset_logging_mode():
"""Remove the most recently set logging mode, returning to the previous
one."""
if len(logging_mode_stack) > 1:
# Remove the current mode
logging_mode_stack.pop()
# Set the previous mode
logging.getLogger().setLevel(logging_mode_stack[-1])
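# Example usage (a sketch): temporarily raise verbosity, then restore it.
#   >>> set_logging_mode("DEBUG")   # root logger now at DEBUG
#   >>> unset_logging_mode()        # back to the previous mode (WARNING)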
# Expose the functions to the main Ivy package
__all__ = ["set_logging_mode", "unset_logging_mode"]
| ivy/ivy/utils/logging.py/0 | {
"file_path": "ivy/ivy/utils/logging.py",
"repo_id": "ivy",
"token_count": 344
} | 50 |
import ivy
import numpy as np
TOLERANCE_DICT = {
"float16": 1e-2,
"bfloat16": 1e-2,
"float32": 1e-5,
"float64": 1e-5,
None: 1e-5,
}
def assert_all_close(
ret_np,
ret_from_gt_np,
backend: str,
rtol=1e-05,
atol=1e-08,
ground_truth_backend="TensorFlow",
):
"""Match the ret_np and ret_from_gt_np inputs element-by-element to ensure
that they are the same.
Parameters
----------
ret_np
Return from the framework to test. Ivy Container or Numpy Array.
ret_from_gt_np
Return from the ground truth framework. Ivy Container or Numpy Array.
rtol
Relative Tolerance Value.
atol
Absolute Tolerance Value.
ground_truth_backend
Ground Truth Backend Framework.
Returns
-------
None if the test passes, else marks the test as failed.
"""
ret_dtype = str(ret_np.dtype)
ret_from_gt_dtype = str(ret_from_gt_np.dtype).replace("longlong", "int64")
assert ret_dtype == ret_from_gt_dtype, (
f"the ground truth framework {ground_truth_backend} returned a"
f" {ret_from_gt_dtype} datatype while the backend {backend} returned a"
f" {ret_dtype} datatype"
)
# TODO enable
# if ivy.is_ivy_container(ret_np) and ivy.is_ivy_container(ret_from_gt_np):
# ivy.Container.cont_multi_map(assert_all_close, [ret_np, ret_from_gt_np])
# else:
if ret_np.dtype == "bfloat16" or ret_from_gt_np.dtype == "bfloat16":
ret_np = ret_np.astype("float64")
ret_from_gt_np = ret_from_gt_np.astype("float64")
assert np.allclose(
np.nan_to_num(ret_np), np.nan_to_num(ret_from_gt_np), rtol=rtol, atol=atol
), (
f" the results from backend {backend} "
f"and ground truth framework {ground_truth_backend} "
f"do not match\n {ret_np}!={ret_from_gt_np} \n\n"
"The mismatching elements are at `False` indices:\n\n"
f"{ret_np == ret_from_gt_np} \n\n"
)
def assert_same_type_and_shape(values, this_key_chain=None):
x_, y_ = values
for x, y in zip(x_, y_):
if isinstance(x, np.ndarray):
x_d = str(x.dtype).replace("longlong", "int64")
y_d = str(y.dtype).replace("longlong", "int64")
assert (
x.shape == y.shape
), f"returned shape = {x.shape}, ground-truth returned shape = {y.shape}"
assert (
x_d == y_d
), f"returned dtype = {x_d}, ground-truth returned dtype = {y_d}"
def assert_same_type(ret_from_target, ret_from_gt, backend_to_test, gt_backend):
"""Assert that the return types from the target and ground truth frameworks
are the same.
checks with a string comparison because with_backend returns
different objects. Doesn't check recursively.
"""
def _assert_same_type(x, y):
assert_msg = (
f"ground truth backend ({gt_backend}) returned"
f" {type(y)} but target backend ({backend_to_test}) returned"
f" {type(x)}"
)
assert str(type(x)) == str(type(y)), assert_msg
ivy.nested_multi_map(
lambda x, _: _assert_same_type(x[0], x[1]), [ret_from_target, ret_from_gt]
)
def value_test(
*,
ret_np_flat,
ret_np_from_gt_flat,
rtol=None,
atol=1e-6,
specific_tolerance_dict=None,
backend: str,
ground_truth_backend="TensorFlow",
):
"""Perform a value test for matching the arrays in ret_np_flat and
ret_from_np_gt_flat.
Parameters
----------
ret_np_flat
A list (flattened) containing Numpy arrays. Return from the
framework to test.
ret_np_from_gt_flat
A list (flattened) containing Numpy arrays. Return from the ground
truth framework.
rtol
Relative Tolerance Value.
atol
Absolute Tolerance Value.
specific_tolerance_dict
(Optional) Dictionary of specific rtol and atol values according to the dtype.
ground_truth_backend
Ground Truth Backend Framework.
Returns
-------
None if the value test passes, else marks the test as failed.
"""
assert_same_type_and_shape([ret_np_flat, ret_np_from_gt_flat])
if type(ret_np_flat) != list: # noqa: E721
ret_np_flat = [ret_np_flat]
if type(ret_np_from_gt_flat) != list: # noqa: E721
ret_np_from_gt_flat = [ret_np_from_gt_flat]
assert len(ret_np_flat) == len(ret_np_from_gt_flat), (
f"The length of results from backend {backend} and ground truth framework"
f" {ground_truth_backend} does not match\n\nlen(ret_np_flat) !="
f" len(ret_np_from_gt_flat):\n\nret_np_flat:\n\n{ret_np_flat}\n\n"
f"ret_np_from_gt_flat:\n\n{ret_np_from_gt_flat}"
)
# value tests, iterating through each array in the flattened returns
if specific_tolerance_dict is not None:
for ret_np, ret_np_from_gt in zip(ret_np_flat, ret_np_from_gt_flat):
dtype = str(ret_np_from_gt.dtype)
if specific_tolerance_dict.get(dtype) is not None:
rtol = specific_tolerance_dict.get(dtype)
else:
rtol = TOLERANCE_DICT.get(dtype, 1e-03) if rtol is None else rtol
assert_all_close(
ret_np,
ret_np_from_gt,
backend=backend,
rtol=rtol,
atol=atol,
ground_truth_backend=ground_truth_backend,
)
elif rtol is not None:
for ret_np, ret_np_from_gt in zip(ret_np_flat, ret_np_from_gt_flat):
assert_all_close(
ret_np,
ret_np_from_gt,
backend=backend,
rtol=rtol,
atol=atol,
ground_truth_backend=ground_truth_backend,
)
else:
for ret_np, ret_np_from_gt in zip(ret_np_flat, ret_np_from_gt_flat):
rtol = TOLERANCE_DICT.get(str(ret_np_from_gt.dtype), 1e-03)
assert_all_close(
ret_np,
ret_np_from_gt,
backend=backend,
rtol=rtol,
atol=atol,
ground_truth_backend=ground_truth_backend,
)
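# A self-contained sketch of how value_test is typically driven (the arrays
# are made up): both flattened returns must match element-wise within the
# dtype-specific tolerance.
#   >>> import numpy as np
#   >>> value_test(
#   ...     ret_np_flat=[np.array([1.0, 2.0], dtype="float32")],
#   ...     ret_np_from_gt_flat=[np.array([1.0, 2.0], dtype="float32")],
#   ...     backend="numpy",
#   ... )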
def check_unsupported_dtype(*, fn, input_dtypes, all_as_kwargs_np):
"""Check whether a function does not support the input data types or the
output data type.
Parameters
----------
fn
The function to check.
input_dtypes
data-types of the input arguments and keyword-arguments.
all_as_kwargs_np
All arguments in Numpy Format, to check for the presence of dtype argument.
Returns
-------
True if the function does not support the given input or output data types, False
otherwise.
"""
test_unsupported = False
unsupported_dtypes_fn = ivy.function_unsupported_dtypes(fn)
supported_dtypes_fn = ivy.function_supported_dtypes(fn)
if unsupported_dtypes_fn:
for d in input_dtypes:
if d in unsupported_dtypes_fn:
test_unsupported = True
break
if (
"dtype" in all_as_kwargs_np
and all_as_kwargs_np["dtype"] in unsupported_dtypes_fn
):
test_unsupported = True
if supported_dtypes_fn and not test_unsupported:
for d in input_dtypes:
if d not in supported_dtypes_fn:
test_unsupported = True
break
if (
"dtype" in all_as_kwargs_np
and all_as_kwargs_np["dtype"] not in supported_dtypes_fn
):
test_unsupported = True
return test_unsupported
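# Illustrative only (the function chosen here is an arbitrary example, and
# the result depends on the active backend): a dtype listed as unsupported
# for `fn` flags the test as unsupported.
#   >>> check_unsupported_dtype(
#   ...     fn=ivy.lgamma, input_dtypes=["float16"], all_as_kwargs_np={}
#   ... )
#   True  # on backends where float16 is unsupported for lgamma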
def check_unsupported_device(*, fn, input_device, all_as_kwargs_np):
"""Check whether a function does not support a given device.
Parameters
----------
fn
The function to check.
input_device
The backend device.
all_as_kwargs_np
All arguments in Numpy Format, to check for the presence of dtype argument.
Returns
-------
True if the function does not support the given device, False otherwise.
"""
test_unsupported = False
unsupported_devices_fn = ivy.function_unsupported_devices(fn)
supported_devices_fn = ivy.function_supported_devices(fn)
if unsupported_devices_fn:
if input_device in unsupported_devices_fn:
test_unsupported = True
if (
"device" in all_as_kwargs_np
and all_as_kwargs_np["device"] in unsupported_devices_fn
):
test_unsupported = True
if supported_devices_fn and not test_unsupported:
if input_device not in supported_devices_fn:
test_unsupported = True
if (
"device" in all_as_kwargs_np
and all_as_kwargs_np["device"] not in supported_devices_fn
):
test_unsupported = True
return test_unsupported
def check_unsupported_device_and_dtype(*, fn, device, input_dtypes, all_as_kwargs_np):
"""Check whether a function does not support a given device or data types.
Parameters
----------
fn
The function to check.
device
The backend device to check.
input_dtypes
data-types of the input arguments and keyword-arguments.
all_as_kwargs_np
All arguments in Numpy Format, to check for the presence of dtype argument.
Returns
-------
True if the function does not support both the device and any data type, False
otherwise.
"""
unsupported_devices_dtypes_fn = ivy.function_unsupported_devices_and_dtypes(fn)
if device in unsupported_devices_dtypes_fn:
for d in input_dtypes:
if d in unsupported_devices_dtypes_fn[device]:
return True
if "device" in all_as_kwargs_np and "dtype" in all_as_kwargs_np:
dev = all_as_kwargs_np["device"]
dtype = all_as_kwargs_np["dtype"]
if dtype in unsupported_devices_dtypes_fn.get(dev, []):
return True
return False
def test_unsupported_function(*, fn, args, kwargs):
"""Test a function with an unsupported datatype to raise an exception.
Parameters
----------
fn
callable function to test.
args
arguments to the function.
kwargs
keyword-arguments to the function.
"""
    try:
        fn(*args, **kwargs)
    except:  # noqa
        return
    raise AssertionError("expected the function to raise for unsupported inputs")
| ivy/ivy_tests/test_ivy/helpers/assertions.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/helpers/assertions.py",
"repo_id": "ivy",
"token_count": 4764
} | 51 |
from .base import FrontendConfigWithBackend
def get_config():
return JaxFrontendConfig()
class JaxFrontendConfig(FrontendConfigWithBackend):
backend_str = "jax"
| ivy/ivy_tests/test_ivy/test_frontends/config/jax.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/config/jax.py",
"repo_id": "ivy",
"token_count": 57
} | 52 |
# local
import ivy
import jax
from ivy_tests.test_ivy.helpers import handle_frontend_test
from ivy.functional.frontends.jax._src.tree_util import tree_leaves, tree_map
import hypothesis.strategies as st
# --- Helpers --- #
# --------------- #
@st.composite
def _tree_dict_strategy(draw):
return draw(tree_strategy())
# --- Main --- #
# ------------ #
def leaf_strategy():
return st.lists(st.integers(1, 10)).map(ivy.array)
# tree_leaves
@handle_frontend_test(
fn_tree="jax._src.tree_util.tree_leaves",
tree=_tree_dict_strategy(),
)
def test_jax_tree_leaves(
*,
tree,
test_flags,
fn_tree,
frontend,
on_device,
backend_fw,
):
ivy.set_backend(backend_fw)
# Apply the tree_leaves function to obtain the leaves of the tree
result = tree_leaves(tree)
# compute the expected result
expected = jax.tree_util.tree_leaves(tree)
# value test
assert result == expected
ivy.previous_backend()
# tree_map
@handle_frontend_test(
fn_tree="jax._src.tree_util.tree_map",
tree=_tree_dict_strategy(),
)
def test_jax_tree_map(
*,
tree,
test_flags,
fn_tree,
frontend,
on_device,
backend_fw,
):
ivy.set_backend(backend_fw)
# Define a function to square each leaf node
def square(x):
if isinstance(x, ivy.Array):
return ivy.square(x)
else:
return x**2
# Apply the square function to the tree using tree_map
result = tree_map(square, tree)
# compute the expected result
expected = ivy.square(ivy.Container(tree))
assert ivy.equal(ivy.Container(result), expected)
ivy.previous_backend()
def tree_strategy(max_depth=2):
if max_depth == 0:
return leaf_strategy()
else:
        return st.dictionaries(
            # keys are single lowercase ascii letters
            keys=st.text(
                alphabet=st.characters(min_codepoint=97, max_codepoint=122),
                min_size=1,
                max_size=1,
            ),
            values=st.one_of(leaf_strategy(), tree_strategy(max_depth - 1)),
            min_size=1,
            max_size=10,
        )
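# For intuition (draws are random, values illustrative): strategies can be
# sampled interactively with .example().
#   >>> tree_strategy().example()
#   {'a': ivy.array([3, 7]), 'b': {'c': ivy.array([1])}}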
| ivy/ivy_tests/test_ivy/test_frontends/test_jax/test__src/test_tree_util.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_jax/test__src/test_tree_util.py",
"repo_id": "ivy",
"token_count": 1095
} | 53 |
# global
from hypothesis import strategies as st, assume
import numpy as np
import ivy
from jax.numpy import tril, triu, r_, c_
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test, BackendHandler
from ...test_numpy.test_indexing_routines.test_inserting_data_into_arrays import (
_helper_r_,
_helper_c_,
)
import ivy.functional.frontends.jax.numpy as jnp_frontend
# --- Helpers --- #
# --------------- #
# diag
@st.composite
def _diag_helper(draw):
dtype, x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
small_abs_safety_factor=2,
large_abs_safety_factor=2,
safety_factor_scale="log",
min_num_dims=1,
max_num_dims=2,
min_dim_size=1,
max_dim_size=50,
)
)
shape = x[0].shape
if len(shape) == 2:
k = draw(helpers.ints(min_value=-shape[0] + 1, max_value=shape[1] - 1))
else:
k = draw(helpers.ints(min_value=0, max_value=shape[0]))
return dtype, x, k
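# E.g. (illustrative): a 2-D draw of shape (3, 4) yields k in [-2, 3],
# matching np.diag's valid diagonal offsets for that shape.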
@st.composite
def _get_dtype_square_x(draw):
dim_size = draw(helpers.ints(min_value=2, max_value=5))
dtype_x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"), shape=(dim_size, dim_size)
)
)
return dtype_x
# unravel_index
@st.composite
def max_value_as_shape_prod(draw):
shape = draw(
helpers.get_shape(
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=5,
)
)
dtype_and_x = draw(
helpers.dtype_values_axis(
available_dtypes=["int32", "int64"],
min_value=0,
max_value=np.prod(shape) - 1,
)
)
return dtype_and_x, shape
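# E.g. (illustrative): for a drawn shape of (2, 3), index values lie in
# [0, 5], so they are always valid flat indices for np.unravel_index.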
# --- Main --- #
# ------------ #
# choose
@handle_frontend_test(
fn_tree="jax.numpy.choose",
dtype_x_indices_axis=helpers.array_indices_axis(
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int32", "int64"],
),
out=st.none(),
mode=st.sampled_from(["wrap", "clip", "raise"]),
test_with_out=st.just(False),
)
def test_jax_choose(
*,
dtype_x_indices_axis,
out,
mode,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtypes, x, indices, axis, _ = dtype_x_indices_axis
choices = ivy.array(
[np.random.randint(0, 10, size=x.shape) for _ in range(len(dtypes))]
)
helpers.test_frontend_function(
input_dtypes=dtypes,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
arr=x,
choices=choices,
out=out,
mode=mode,
)
@handle_frontend_test(
fn_tree="jax.numpy.diag",
dtype_x_k=_diag_helper(),
test_with_out=st.just(False),
)
def test_jax_diag(
*,
dtype_x_k,
test_flags,
on_device,
fn_tree,
frontend,
backend_fw,
):
dtype, x, k = dtype_x_k
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
v=x[0],
k=k,
)
@handle_frontend_test(
fn_tree="jax.numpy.diag_indices",
n=helpers.ints(min_value=1, max_value=10),
ndim=helpers.ints(min_value=2, max_value=10),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_jax_diag_indices(
n,
ndim,
dtype,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
n=n,
ndim=ndim,
)
@handle_frontend_test(
dtype_x=_get_dtype_square_x(),
fn_tree="jax.numpy.diag_indices_from",
test_with_out=st.just(False),
)
def test_jax_diag_indices_from(
dtype_x,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtype, x = dtype_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
arr=x[0],
)
@handle_frontend_test(
fn_tree="jax.numpy.diagonal",
dtype_and_values=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
shape=st.shared(helpers.get_shape(min_num_dims=2), key="shape"),
),
dims_and_offset=helpers.dims_and_offset(
shape=st.shared(helpers.get_shape(min_num_dims=2), key="shape")
),
)
def test_jax_diagonal(
*,
dtype_and_values,
dims_and_offset,
test_flags,
on_device,
fn_tree,
frontend,
backend_fw,
):
input_dtype, value = dtype_and_values
axis1, axis2, offset = dims_and_offset
a = value[0]
num_of_dims = len(np.shape(a))
assume(axis1 != axis2)
if axis1 < 0:
assume(axis1 + num_of_dims != axis2)
if axis2 < 0:
assume(axis1 != axis2 + num_of_dims)
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
a=a,
offset=offset,
axis1=axis1,
axis2=axis2,
)
@handle_frontend_test(
fn_tree="jax.numpy.mask_indices",
n=helpers.ints(min_value=3, max_value=10),
mask_func=st.sampled_from([triu, tril]),
k=helpers.ints(min_value=-5, max_value=5),
input_dtype=helpers.get_dtypes("numeric"),
test_with_out=st.just(False),
number_positional_args=st.just(2),
)
def test_jax_mask_indices(
n,
mask_func,
k,
input_dtype,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
n=n,
mask_func=mask_func,
k=k,
)
@handle_frontend_test(fn_tree="jax.numpy.c_", inputs=_helper_c_()) # dummy fn_tree
def test_jax_numpy_c_(inputs, backend_fw):
ret_gt = c_.__getitem__(tuple(inputs))
with BackendHandler.update_backend(backend_fw):
ret = jnp_frontend.c_.__getitem__(tuple(inputs))
assert np.allclose(ret.ivy_array, ret_gt)
@handle_frontend_test(
fn_tree="jax.numpy.indices",
dimensions=helpers.get_shape(min_num_dims=1),
dtype=helpers.get_dtypes("numeric"),
sparse=st.booleans(),
test_with_out=st.just(False),
)
def test_jax_numpy_indices(
*,
dimensions,
dtype,
sparse,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
dimensions=dimensions,
dtype=dtype[0],
sparse=sparse,
)
@handle_frontend_test(fn_tree="jax.numpy.r_", inputs=_helper_r_()) # dummy fn_tree
def test_jax_numpy_r_(inputs, backend_fw):
inputs, *_ = inputs
ret_gt = r_.__getitem__(tuple(inputs))
with BackendHandler.update_backend(backend_fw):
ret = jnp_frontend.r_.__getitem__(tuple(inputs))
assert np.allclose(ret.ivy_array, ret_gt)
# take_along_axis
@handle_frontend_test(
fn_tree="jax.numpy.take_along_axis",
dtype_x_indices_axis=helpers.array_indices_axis(
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int32", "int64"],
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
indices_same_dims=True,
valid_bounds=False,
),
mode=st.sampled_from(["clip", "fill", "drop"]),
test_with_out=st.just(False),
)
def test_jax_take_along_axis(
*,
dtype_x_indices_axis,
mode,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtypes, x, indices, axis, _ = dtype_x_indices_axis
helpers.test_frontend_function(
input_dtypes=dtypes,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
arr=x,
indices=indices,
axis=axis,
mode=mode,
)
# Tril_indices
@handle_frontend_test(
fn_tree="jax.numpy.tril_indices",
n_rows=helpers.ints(min_value=1, max_value=10),
k=helpers.ints(min_value=2, max_value=10),
dtype=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_jax_tril_indices(
n_rows,
k,
dtype,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
n=n_rows,
k=k,
)
# tril_indices_from
@handle_frontend_test(
fn_tree="jax.numpy.tril_indices_from",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=1,
min_num_dims=2,
max_num_dims=5,
),
k=helpers.ints(min_value=-5, max_value=5),
test_with_out=st.just(False),
)
def test_jax_tril_indices_from(
dtype_and_x,
k,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
arr=x[0],
k=k,
)
# triu_indices
@handle_frontend_test(
fn_tree="jax.numpy.triu_indices",
n=helpers.ints(min_value=2, max_value=10),
k=helpers.ints(min_value=-10, max_value=10),
input_dtypes=helpers.get_dtypes("valid", full=False),
test_with_out=st.just(False),
)
def test_jax_triu_indices(
n,
k,
input_dtypes,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
helpers.test_frontend_function(
n=n,
k=k,
input_dtypes=input_dtypes,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
)
# triu_indices_from
@handle_frontend_test(
fn_tree="jax.numpy.triu_indices_from",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=1,
min_num_dims=2,
max_num_dims=5,
),
k=helpers.ints(min_value=-5, max_value=5),
test_with_out=st.just(False),
)
def test_jax_triu_indices_from(
dtype_and_x,
k,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
arr=x[0],
k=k,
)
@handle_frontend_test(
fn_tree="jax.numpy.unravel_index",
dtype_x_shape=max_value_as_shape_prod(),
test_with_out=st.just(False),
)
def test_jax_unravel_index(
*,
dtype_x_shape,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
dtype_and_x, shape = dtype_x_shape
input_dtype, x = dtype_and_x[0], dtype_and_x[1]
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
frontend=frontend,
fn_tree=fn_tree,
on_device=on_device,
indices=x[0],
shape=shape,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_indexing.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_indexing.py",
"repo_id": "ivy",
"token_count": 6258
} | 54 |
import pytest
@pytest.fixture(scope="session")
def frontend():
return "numpy"
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/conftest.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/conftest.py",
"repo_id": "ivy",
"token_count": 31
} | 55 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
from ivy_tests.test_ivy.test_functional.test_experimental.test_nn.test_layers import (
_x_and_ifft,
_x_and_rfftn,
)
@handle_frontend_test(
fn_tree="numpy.fft.fft",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float_and_complex"),
shape=(2,),
min_axis=-1,
force_int_axis=True,
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
n=st.integers(min_value=2, max_value=10),
)
def test_numpy_fft(
dtype_input_axis, norm, n, backend_fw, frontend, test_flags, fn_tree, on_device
):
input_dtype, x, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.fftfreq",
n=st.integers(min_value=10, max_value=100),
sample_rate=st.integers(min_value=1, max_value=10),
)
def test_numpy_fftfreq(
n, sample_rate, backend_fw, frontend, test_flags, fn_tree, on_device
):
d = 1 / sample_rate
helpers.test_frontend_function(
input_dtypes=[int],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
n=n,
d=d,
)
@handle_frontend_test(
fn_tree="numpy.fft.fftshift",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"), shape=(4,), array_api_dtypes=True
),
)
def test_numpy_fftshift(
dtype_and_x, backend_fw, frontend, test_flags, fn_tree, on_device
):
input_dtype, arr = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=arr[0],
axes=None,
)
# _x_and_ifft is defined in test_functional/test_experimental/test_nn/test_layers.py
@handle_frontend_test(
fn_tree="numpy.fft.ifft",
dtype_and_x=_x_and_ifft(),
)
def test_numpy_ifft(dtype_and_x, backend_fw, frontend, test_flags, fn_tree, on_device):
input_dtype, x, dim, norm, n = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x,
n=n,
axis=dim,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.ifft2",
dtype_and_x=_x_and_ifft(),
)
def test_numpy_ifft2(dtype_and_x, backend_fw, frontend, test_flags, fn_tree, on_device):
input_dtype, x, dim, norm, n = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x,
s=None,
axes=None,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.ifftn",
dtype_and_x=_x_and_ifft(),
)
def test_numpy_ifftn(dtype_and_x, backend_fw, frontend, test_flags, fn_tree, on_device):
input_dtype, x, dim, norm, n = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x,
s=None,
axes=None,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.ifftshift",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"), shape=(4,), array_api_dtypes=True
),
)
def test_numpy_ifftshift(
dtype_and_x, backend_fw, frontend, test_flags, fn_tree, on_device
):
input_dtype, arr = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=arr[0],
axes=None,
)
@handle_frontend_test(
fn_tree="numpy.fft.ihfft",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float_and_complex"),
shape=(2,),
min_axis=-1,
force_int_axis=True,
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
n=st.integers(min_value=2, max_value=5),
)
def test_numpy_ihfft(
dtype_input_axis, norm, n, backend_fw, frontend, test_flags, fn_tree, on_device
):
input_dtype, x, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.rfft",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float_and_complex"),
shape=(2,),
min_axis=-1,
force_int_axis=True,
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
n=st.integers(min_value=2, max_value=5),
)
def test_numpy_rfft(
dtype_input_axis, norm, n, backend_fw, frontend, test_flags, fn_tree, on_device
):
input_dtype, x, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="numpy.fft.rfftfreq",
n=st.integers(min_value=10, max_value=100),
sample_rate=st.integers(min_value=1, max_value=10),
)
def test_numpy_rfftfreq(
n, sample_rate, backend_fw, frontend, test_flags, fn_tree, on_device
):
d = 1 / sample_rate
helpers.test_frontend_function(
input_dtypes=[int],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
n=n,
d=d,
)
@handle_frontend_test(
fn_tree="numpy.fft.rfftn",
dtype_and_x=_x_and_rfftn(),
)
def test_numpy_rfftn(dtype_and_x, frontend, backend_fw, test_flags, fn_tree, on_device):
dtype, x, s, axes, norm = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
a=x,
s=s,
axes=axes,
norm=norm,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_fft/test_discrete_fourier_transform.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_fft/test_discrete_fourier_transform.py",
"repo_id": "ivy",
"token_count": 3733
} | 56 |
# global
import numpy as np
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
# --- Helpers --- #
# --------------- #
# isin
@st.composite
def _isin_data_generation_helper(draw):
dtype_and_x = helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
)
return draw(dtype_and_x)
# --- Main --- #
# ------------ #
@handle_frontend_test(
fn_tree="numpy.allclose",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
),
equal_nan=st.booleans(),
test_with_out=st.just(False),
)
def test_numpy_allclose(
*,
dtype_and_x,
equal_nan,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
b=x[1],
equal_nan=equal_nan,
)
@handle_frontend_test(
fn_tree="numpy.isclose",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
),
equal_nan=st.booleans(),
test_with_out=st.just(False),
)
def test_numpy_isclose(
*,
dtype_and_x,
equal_nan,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
b=x[1],
equal_nan=equal_nan,
)
# isin
@handle_frontend_test(
fn_tree="numpy.isin",
assume_unique_and_dtype_and_x=_isin_data_generation_helper(),
invert=st.booleans(),
)
def test_numpy_isin(
*,
assume_unique_and_dtype_and_x,
invert,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
x_and_dtype = assume_unique_and_dtype_and_x
dtypes, values = x_and_dtype
elements, test_elements = values
helpers.test_frontend_function(
input_dtypes=dtypes,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
element=elements,
test_elements=test_elements,
invert=invert,
backend_to_test=backend_fw,
)
@handle_frontend_test(
fn_tree="numpy.isneginf",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_integer"),
min_value=-np.inf,
max_value=np.inf,
),
test_with_out=st.just(False),
)
def test_numpy_isneginf(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
)
@handle_frontend_test(
fn_tree="numpy.isposinf",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_integer"),
min_value=-np.inf,
max_value=np.inf,
),
test_with_out=st.just(False),
)
def test_numpy_isposinf(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_logic/test_array_contents.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_logic/test_array_contents.py",
"repo_id": "ivy",
"token_count": 2013
} | 57 |
# global
import numpy as np
# local
import ivy.functional.frontends.numpy as np_frontend
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test, BackendHandler
@handle_frontend_test(
fn_tree="numpy.asmatrix",
arr=helpers.dtype_and_values(min_num_dims=2, max_num_dims=2),
)
def test_numpy_asmatrix(arr, backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
dtype, x = arr
ret = np_frontend.asmatrix(x[0])
ret_gt = np.asmatrix(x[0])
assert ret.shape == ret_gt.shape
assert ivy_backend.all(ivy_backend.flatten(ret._data) == np.ravel(ret_gt))
@handle_frontend_test(
fn_tree="numpy.asscalar",
arr=helpers.array_values(dtype=helpers.get_dtypes("numeric"), shape=1),
)
def test_numpy_asscalar(arr: np.ndarray):
ret_1 = arr.item()
ret_2 = np_frontend.asscalar(arr)
assert ret_1 == ret_2
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_manipulation_routines/test_changing_kind_of_array.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_manipulation_routines/test_changing_kind_of_array.py",
"repo_id": "ivy",
"token_count": 410
} | 58 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
# sinc
@handle_frontend_test(
fn_tree="numpy.sinc",
dtype_and_x=helpers.dtype_and_values(available_dtypes=helpers.get_dtypes("float")),
test_with_out=st.just(False),
)
def test_numpy_sinc(
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_mathematical_functions/test_other_special_functions.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_mathematical_functions/test_other_special_functions.py",
"repo_id": "ivy",
"token_count": 344
} | 59 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
@handle_frontend_test(
fn_tree="numpy.argsort",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("numeric"),
min_axis=-1,
max_axis=0,
min_num_dims=1,
force_int_axis=True,
),
test_with_out=st.just(False),
)
def test_numpy_argsort(
*,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
axis=axis,
)
@handle_frontend_test(
fn_tree="numpy.lexsort",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("numeric"),
min_axis=-1,
max_axis=0,
min_num_dims=1,
force_int_axis=True,
),
test_with_out=st.just(False),
)
def test_numpy_lexsort(
*,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
keys=x[0],
axis=axis,
)
@handle_frontend_test(
fn_tree="numpy.msort",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
min_dim_size=1,
min_axis=-1,
max_axis=0,
),
)
def test_numpy_msort(
*,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
)
@handle_frontend_test(
fn_tree="numpy.partition",
dtype_x_axis=helpers.array_indices_axis(
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int64"],
min_dim_size=1,
max_num_dims=1,
indices_same_dims=False,
disable_random_axis=False,
axis_zero=False,
valid_bounds=True,
),
test_with_out=st.just(False),
)
def test_numpy_partition(
*,
dtype_x_axis,
frontend,
test_flags,
backend_fw,
fn_tree,
on_device,
):
dtypes, x, kth, axis, _ = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
a=x,
kth=kth,
axis=axis,
)
@handle_frontend_test(
fn_tree="numpy.sort",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("float"),
min_axis=-1,
max_axis=0,
min_num_dims=1,
force_int_axis=True,
),
test_with_out=st.just(False),
)
def test_numpy_sort(
*,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
axis=axis,
)
@handle_frontend_test(
fn_tree="numpy.sort_complex",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
min_dim_size=1,
min_axis=-1,
max_axis=0,
),
test_with_out=st.just(False),
)
def test_numpy_sort_complex(
*,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
a=x[0],
test_values=False,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_sorting_searching_counting/test_sorting.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_numpy/test_sorting_searching_counting/test_sorting.py",
"repo_id": "ivy",
"token_count": 2398
} | 60 |
# global
from hypothesis import given, strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
from ivy_tests.test_ivy.test_functional.test_experimental.test_nn.test_layers import (
_x_and_ifftn,
)
# Custom Hypothesis strategy for generating sequences of 2 integers
def sequence_of_two_integers():
return st.lists(st.integers(), min_size=2, max_size=2)
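# e.g. (illustrative, draws are random): sequence_of_two_integers().example()
# might yield [3, 7].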
@handle_frontend_test(
fn_tree="paddle.fft.fft",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=1,
min_dim_size=2,
valid_axis=True,
force_int_axis=True,
),
n=st.one_of(
st.integers(min_value=2, max_value=10),
st.just(None),
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_fft(
dtype_x_axis,
n,
norm,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtypes, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.fft2",
dtypes_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=2,
min_dim_size=2,
valid_axis=True,
force_int_axis=True,
),
s=st.one_of(
st.none(),
st.lists(st.integers(min_value=2, max_value=10), min_size=2, max_size=2),
),
axes=st.one_of(
st.none(),
st.tuples(
st.integers(min_value=-2, max_value=2),
st.integers(min_value=-1, max_value=2),
),
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_fft2(
dtypes_x_axis,
s,
axes,
norm,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtypes, x, _ = dtypes_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
s=s,
axes=axes,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.fftfreq",
n=st.integers(min_value=1, max_value=1000),
sample_rate=st.integers(min_value=1, max_value=20),
dtypes=helpers.get_dtypes("valid"),
)
def test_paddle_fftfreq(
n,
sample_rate,
dtypes,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
d = 1 / sample_rate
helpers.test_frontend_function(
input_dtypes=dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
n=n,
d=d,
)
@handle_frontend_test(
fn_tree="paddle.fft.fftshift",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=1,
valid_axis=True,
force_int_axis=True,
),
)
def test_paddle_fftshift(
dtype_x_axis, frontend, test_flags, fn_tree, on_device, backend_fw
):
input_dtype, x, axes = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=x[0],
axes=axes,
)
@handle_frontend_test(
fn_tree="paddle.fft.hfft",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("complex"),
min_value=-10,
max_value=10,
min_num_dims=1,
valid_axis=True,
force_int_axis=True,
),
)
def test_paddle_hfft(
dtype_x_axis,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
input_dtype, x, axes = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=x[0],
        axis=axes,  # paddle.fft.hfft takes a single axis, not axes
)
@given(
s=st.one_of(
st.none(), st.tuples(st.integers(min_value=1), st.integers(min_value=1))
),
axis=st.one_of(st.none(), st.tuples(st.integers(min_value=-2, max_value=-1))),
shape=st.lists(st.integers(min_value=1, max_value=10), min_size=2, max_size=2).map(
tuple
),
)
@handle_frontend_test(
fn_tree="paddle.fft.hfft2",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("complex64"),
),
)
def test_paddle_hfft2(
dtype_x_axis,
s,
axis,
frontend,
backend_fw,
test_flags,
fn_tree,
shape,
):
input_dtypes, x, axis = dtype_x_axis
    x = x[0].reshape(shape)  # reshape the drawn array to the generated shape
for norm in ["backward", "forward", "ortho"]:
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
x=x,
s=s,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.ifft",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=1,
min_dim_size=2,
valid_axis=True,
force_int_axis=True,
),
n=st.one_of(
st.integers(min_value=2, max_value=10),
st.just(None),
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_ifft(
dtype_x_axis,
n,
norm,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtypes, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
n=n,
axis=axis,
norm=norm,
)
# ifftn
@handle_frontend_test(
fn_tree="paddle.fft.ifftn",
dtype_and_x=_x_and_ifftn(),
)
def test_paddle_ifftn(
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
):
dtype, x, s, axes, norm = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
x=x,
s=s,
axes=axes,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.ifftshift",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=1,
valid_axis=True,
force_int_axis=True,
),
)
def test_paddle_ifftshift(
dtype_x_axis,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
input_dtype, x, axes = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=x[0],
axes=axes,
)
@handle_frontend_test(
fn_tree="paddle.fft.ihfft2",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=["float64", "float32", "int64", "int32"],
min_value=-10,
max_value=10,
min_num_dims=2,
max_num_dims=2,
shape=st.tuples(
st.integers(min_value=2, max_value=10),
st.integers(min_value=2, max_value=10),
),
),
s=st.one_of(
st.lists(st.integers(min_value=2, max_value=10), min_size=2, max_size=2),
),
axes=st.just([-2, -1]),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_ihfft2(
dtype_x_axis,
s,
axes,
norm,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtypes, x, axis_ = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
s=s,
axes=axes,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.ihfftn",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=["float64", "float32", "int64", "int32"],
min_value=-10,
max_value=10,
min_num_dims=2,
max_num_dims=2,
shape=st.tuples(
st.integers(min_value=2, max_value=10),
st.integers(min_value=2, max_value=10),
),
),
s=st.one_of(
st.lists(st.integers(min_value=2, max_value=10), min_size=2, max_size=2),
),
axes=st.just([-2, -1]),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_ihfftn(
dtype_x_axis,
s,
axes,
norm,
frontend,
backend_fw,
test_flags,
fn_tree,
):
input_dtypes, x, axis_ = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
s=s,
axes=axes,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.irfft",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=1,
min_dim_size=2,
valid_axis=True,
force_int_axis=True,
),
n=st.one_of(
st.integers(min_value=2, max_value=10),
st.just(None),
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_irfft(
dtype_x_axis,
n,
norm,
frontend,
test_flags,
fn_tree,
backend_fw,
):
input_dtypes, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.irfft2",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_value=-10,
max_value=10,
min_num_dims=2,
valid_axis=True,
force_int_axis=True,
),
)
@given(st.data())
def test_paddle_irfft2(
data,
dtype_x_axis,
frontend,
test_flags,
fn_tree,
on_device,
backend_fw,
):
input_dtype, x, axes = dtype_x_axis
for norm in ["backward", "forward", "ortho"]:
s_values = data.draw(s_strategy)
axes_values = data.draw(axes_strategy)
# Ensure s and axes are sequences of 2 integers
assert len(s_values) == 2
assert len(axes_values) == 2
# Convert s and axes to tuples as needed
s = tuple(s_values)
axes = tuple(axes_values)
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
x=x[0],
s=s,
axes=axes,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.irfftn",
dtype_x_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("complex"),
min_value=-10,
max_value=10,
min_num_dims=1,
max_num_dims=5,
min_dim_size=2,
max_dim_size=5,
valid_axis=True,
force_int_axis=True,
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
)
def test_paddle_irfftn(
dtype_x_axis,
norm,
frontend,
test_flags,
fn_tree,
backend_fw,
):
input_dtypes, x, axis = dtype_x_axis
helpers.test_frontend_function(
input_dtypes=input_dtypes,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
x=x[0],
s=None,
axes=None,
norm=norm,
)
# rfft
@handle_frontend_test(
fn_tree="paddle.fft.rfft",
dtype_input_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
min_dim_size=2,
shape=helpers.get_shape(
min_num_dims=1,
max_num_dims=2,
min_dim_size=2,
max_dim_size=4,
),
large_abs_safety_factor=12,
small_abs_safety_factor=12,
safety_factor_scale="log",
force_int_axis=True,
valid_axis=True,
allow_neg_axes=True,
),
norm=st.sampled_from(["backward", "ortho", "forward"]),
n=st.integers(min_value=2, max_value=10) | st.none(),
)
def test_paddle_rfft(
dtype_input_axis, norm, n, frontend, backend_fw, test_flags, fn_tree, on_device
):
input_dtype, x, axis = dtype_input_axis
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
n=n,
axis=axis,
norm=norm,
)
@handle_frontend_test(
fn_tree="paddle.fft.rfftfreq",
n=st.integers(min_value=1, max_value=1000),
sample_rate=st.integers(min_value=1, max_value=20),
)
def test_paddle_rfftfreq(
n, sample_rate, backend_fw, frontend, test_flags, fn_tree, on_device
):
d = 1 / sample_rate
helpers.test_frontend_function(
input_dtypes=[int],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=True,
n=n,
d=d,
)
# Use the custom strategy for s and axes
axes_strategy = sequence_of_two_integers()
s_strategy = sequence_of_two_integers()
| ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_fft.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_fft.py",
"repo_id": "ivy",
"token_count": 7768
} | 61 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
# --- Helpers --- #
# --------------- #
@st.composite
def _multinomial_helper(draw):
input_dtype_and_x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
shape=helpers.get_shape(min_num_dims=1, max_num_dims=2, min_dim_size=2),
)
)
num_samples = draw(st.integers(min_value=1, max_value=10))
if num_samples > 2:
replacement = True
else:
replacement = draw(st.booleans())
input_dtype, x = input_dtype_and_x
total = sum(x)
x = [arr / total for arr in x]
return input_dtype, x, num_samples, replacement
# --- Main --- #
# ------------ #
# multinomial
@handle_frontend_test(
fn_tree="paddle.tensor.random.multinomial",
input_dtype_and_x=_multinomial_helper(),
)
def test_paddle_multinomial(
input_dtype_and_x,
test_flags,
frontend,
backend_fw,
fn_tree,
on_device,
):
input_dtype, x, num_samples, replacement = input_dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
test_flags=test_flags,
frontend=frontend,
backend_to_test=backend_fw,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
x=x[0],
num_samples=num_samples,
replacement=replacement,
)
@handle_frontend_test(
fn_tree="paddle.normal",
input_dtypes=st.sampled_from([["float32"], ["float64"]]),
shape=helpers.get_shape(
min_num_dims=1,
min_dim_size=1,
),
mean=st.floats(
min_value=-10,
max_value=10,
),
std=st.floats(
min_value=0,
max_value=10,
),
)
def test_paddle_normal(
input_dtypes,
shape,
mean,
std,
frontend,
backend_fw,
test_flags,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
mean=mean,
std=std,
shape=shape,
)
@handle_frontend_test(
fn_tree="paddle.poisson",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=0,
max_value=1000,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
max_dim_size=2,
),
)
def test_paddle_poisson(dtype_and_x, backend_fw, frontend, test_flags, fn_tree):
dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
x=x[0],
)
@handle_frontend_test(
fn_tree="paddle.rand",
input_dtypes=st.sampled_from(["int32", "int64"]),
shape=helpers.get_shape(
allow_none=False,
min_num_dims=0,
min_dim_size=1,
),
dtype=helpers.get_dtypes("valid", full=False),
)
def test_paddle_rand(
*,
input_dtypes,
shape,
dtype,
frontend,
backend_fw,
test_flags,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=[input_dtypes],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
shape=shape,
dtype=dtype[0],
)
# randint
@handle_frontend_test(
fn_tree="paddle.randint",
low=helpers.ints(min_value=0, max_value=10),
high=helpers.ints(min_value=11, max_value=20),
dtype=helpers.get_dtypes("integer"),
shape=helpers.get_shape(
allow_none=False, min_num_dims=2, max_num_dims=7, min_dim_size=2
),
)
def test_paddle_randint(
low,
high,
dtype,
backend_fw,
frontend,
test_flags,
shape,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_values=False,
fn_tree=fn_tree,
test_flags=test_flags,
low=low,
high=high,
shape=shape,
)
@handle_frontend_test(
fn_tree="paddle.randint_like",
input_dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
shape=helpers.get_shape(
allow_none=False, min_num_dims=2, max_num_dims=7, min_dim_size=2
),
),
low=st.integers(min_value=0, max_value=10),
high=st.integers(min_value=11, max_value=20),
dtype=helpers.get_dtypes("integer"),
)
def test_paddle_randint_like(
input_dtype_and_x,
low,
high,
dtype,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = input_dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
test_values=False,
x=x[0],
low=low,
high=high,
dtype=dtype[0],
)
@handle_frontend_test(
fn_tree="paddle.randn",
input_dtypes=st.sampled_from(["int32", "int64"]),
shape=helpers.get_shape(
allow_none=False, min_num_dims=1, max_num_dims=1, min_dim_size=2
),
dtype=st.sampled_from(["float32", "float64"]),
)
def test_paddle_randn(
*,
input_dtypes,
shape,
dtype,
frontend,
backend_fw,
test_flags,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=[input_dtypes],
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
shape=shape,
dtype=dtype,
)
@handle_frontend_test(
fn_tree="paddle.standard_normal",
input_dtypes=st.sampled_from([["int32"], ["int64"]]),
shape=helpers.get_shape(
min_num_dims=1,
min_dim_size=1,
),
dtype=helpers.get_dtypes("valid", full=False),
)
def test_paddle_standard_normal(
input_dtypes,
shape,
dtype,
frontend,
backend_fw,
test_flags,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
shape=shape,
dtype=dtype[0],
)
@handle_frontend_test(
fn_tree="paddle.uniform",
input_dtypes=helpers.get_dtypes("float"),
shape=st.tuples(
st.integers(min_value=2, max_value=5), st.integers(min_value=2, max_value=5)
),
dtype=helpers.get_dtypes("valid", full=False),
min=st.floats(allow_nan=False, allow_infinity=False, width=32),
max=st.floats(allow_nan=False, allow_infinity=False, width=32),
seed=st.integers(min_value=2, max_value=5),
)
def test_paddle_uniform(
input_dtypes,
shape,
dtype,
min,
max,
seed,
frontend,
backend_fw,
test_flags,
fn_tree,
):
helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
test_values=False,
shape=shape,
dtype=dtype[0],
min=min,
max=max,
seed=seed,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_random.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_paddle/test_random.py",
"repo_id": "ivy",
"token_count": 3788
} | 62 |
import pytest
@pytest.fixture(scope="session")
def frontend():
return "scipy"
| ivy/ivy_tests/test_ivy/test_frontends/test_scipy/conftest.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_scipy/conftest.py",
"repo_id": "ivy",
"token_count": 32
} | 63 |
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_method
CLASS_TREE = "ivy.functional.frontends.sklearn.preprocessing"
@handle_frontend_method(
class_tree=CLASS_TREE + ".LabelEncoder",
init_tree="sklearn.preprocessing.LabelEncoder",
method_name="fit",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_integer"),
min_num_dims=1,
max_num_dims=1,
),
)
def test_sklearn_label_encoder_fit(
dtype_x,
frontend,
frontend_method_data,
init_flags,
method_flags,
on_device,
backend_fw,
):
input_dtype, x = dtype_x
helpers.test_frontend_method(
init_input_dtypes=input_dtype,
init_all_as_kwargs_np={},
method_input_dtypes=input_dtype,
method_all_as_kwargs_np={
"y": x[0],
},
frontend_method_data=frontend_method_data,
init_flags=init_flags,
method_flags=method_flags,
frontend=frontend,
on_device=on_device,
backend_to_test=backend_fw,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_sklearn/test_preprocessing/test_label.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_sklearn/test_preprocessing/test_label.py",
"repo_id": "ivy",
"token_count": 520
} | 64 |
# global
from hypothesis import strategies as st, assume
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
from ivy_tests.test_ivy.test_functional.test_experimental.test_nn.test_layers import (
_interp_args,
)
# --- Helpers --- #
# --------------- #
@st.composite
def _extract_patches_helper(draw):
sizes = [
1,
draw(st.integers(min_value=1, max_value=5)),
draw(st.integers(min_value=1, max_value=5)),
1,
]
rates = [
1,
draw(st.integers(min_value=1, max_value=5)),
draw(st.integers(min_value=1, max_value=5)),
1,
]
x_dim = []
for i in range(1, 3):
min_x = sizes[i] + (sizes[i] - 1) * (rates[i] - 1)
x_dim.append(draw(st.integers(min_x, min_x + 5)))
x_shape = [
draw(st.integers(min_value=1, max_value=5)),
*x_dim,
draw(st.integers(min_value=1, max_value=5)),
]
dtype_x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
shape=x_shape,
)
)
strides = [
1,
draw(st.integers(min_value=1, max_value=5)),
draw(st.integers(min_value=1, max_value=5)),
1,
]
padding = draw(st.sampled_from(["VALID", "SAME"]))
return dtype_x, sizes, strides, rates, padding
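# The lower bound used above is the effective extent of a dilated patch:
#     effective = size + (size - 1) * (rate - 1)
# e.g. size=3, rate=2 -> effective=5, so each spatial dim is drawn at >= 5
# to guarantee the image can hold at least one patch.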
# --- Main --- #
# ------------ #
# extract_patches
@handle_frontend_test(
fn_tree="tensorflow.image.extract_patches",
dtype_values_and_other=_extract_patches_helper(),
test_with_out=st.just(False),
)
def test_tensorflow_extract_patches(
*,
dtype_values_and_other,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
(x_dtype, x), sizes, strides, rates, padding = dtype_values_and_other
helpers.test_frontend_function(
input_dtypes=x_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
images=x[0],
sizes=sizes,
strides=strides,
rates=rates,
padding=padding,
)
@handle_frontend_test(
fn_tree="tensorflow.image.resize",
dtype_x_mode=_interp_args(
mode_list=[
"bilinear",
"nearest",
"area",
"bicubic",
"lanczos3",
"lanczos5",
"mitchellcubic",
"gaussian",
]
),
antialias=st.booleans(),
test_with_out=st.just(False),
)
def test_tensorflow_resize(
dtype_x_mode,
antialias,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x, mode, size, _, _, preserve = dtype_x_mode
try:
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-01,
atol=1e-01,
image=x[0],
size=size,
method=mode,
antialias=antialias,
preserve_aspect_ratio=preserve,
)
except Exception as e:
if hasattr(e, "message") and (
"output dimensions must be positive" in e.message
or "Input and output sizes should be greater than 0" in e.message
):
assume(False)
raise e
| ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test_image/test_cropping.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_tensorflow/test_image/test_cropping.py",
"repo_id": "ivy",
"token_count": 1799
} | 65 |
from hypothesis import strategies as st
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test
@handle_frontend_test(
fn_tree="torch.bartlett_window",
window_length=helpers.ints(min_value=2, max_value=100),
periodic=st.booleans(),
dtype=helpers.get_dtypes("float", full=False),
)
def test_torch_bartlett_window(
window_length,
periodic,
dtype,
on_device,
fn_tree,
frontend,
test_flags,
backend_fw,
):
helpers.test_frontend_function(
input_dtypes=[],
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
window_length=window_length,
periodic=periodic,
dtype=dtype[0],
rtol=1e-02,
atol=1e-02,
)
@handle_frontend_test(
window_length=helpers.ints(min_value=1, max_value=100),
dtype=helpers.get_dtypes("float", full=False),
fn_tree="torch.blackman_window",
periodic=st.booleans(),
)
def test_torch_blackman_window(
*,
window_length,
dtype,
periodic,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
helpers.test_frontend_function(
input_dtypes=[],
on_device=on_device,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
window_length=window_length,
periodic=periodic,
dtype=dtype[0],
rtol=1e-02,
atol=1e-02,
)
# hamming_window
@handle_frontend_test(
fn_tree="torch.hamming_window",
dtype_and_window_length=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer"),
max_num_dims=0,
min_value=1,
max_value=20,
),
periodic=st.booleans(),
dtype_and_coefficients=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
max_num_dims=0,
num_arrays=2,
min_value=0,
max_value=5,
),
dtype=helpers.get_dtypes("float"),
test_with_out=st.just(False),
)
def test_torch_hamming_window(
dtype_and_window_length,
periodic,
dtype_and_coefficients,
*,
dtype,
fn_tree,
frontend,
test_flags,
backend_fw,
):
window_length_dtype, window_length = dtype_and_window_length
coefficients_dtypes, coefficients = dtype_and_coefficients
helpers.test_frontend_function(
input_dtypes=window_length_dtype + coefficients_dtypes,
window_length=int(window_length[0]),
periodic=periodic,
alpha=float(coefficients[0]),
beta=float(coefficients[1]),
dtype=dtype[0],
fn_tree=fn_tree,
frontend=frontend,
test_flags=test_flags,
backend_to_test=backend_fw,
rtol=1e-1,
atol=1e-1,
)
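# For reference, the generalized Hamming window these coefficients
# parameterize is (standard definition, for n = 0, ..., N - 1):
#     w[n] = alpha - beta * cos(2 * pi * n / (N - 1))
# with N - 1 replaced by N when periodic=True.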
@handle_frontend_test(
window_length=helpers.ints(min_value=1, max_value=100),
dtype=helpers.get_dtypes("float", full=False),
fn_tree="torch.kaiser_window",
periodic=st.booleans(),
beta=helpers.floats(min_value=1, max_value=20),
)
def test_torch_kaiser_window(
*,
window_length,
dtype,
periodic,
beta,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
helpers.test_frontend_function(
input_dtypes=[],
on_device=on_device,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
window_length=window_length,
periodic=periodic,
beta=beta,
dtype=dtype[0],
rtol=1e-02,
atol=1e-02,
)
| ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_spectral_ops.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_frontends/test_torch/test_spectral_ops.py",
"repo_id": "ivy",
"token_count": 1790
} | 66 |
"""Collection of tests for unified general functions."""
# global
import time
import math
from types import SimpleNamespace
import pytest
from hypothesis import given, assume, strategies as st
import numpy as np
from collections.abc import Sequence
# local
import threading
import ivy
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import (
handle_test,
BackendHandler,
handle_example,
)
from ivy_tests.test_ivy.helpers.assertions import assert_all_close
from ivy_tests.test_ivy.test_functional.test_core.test_elementwise import pow_helper
try:
import tensorflow as tf
except ImportError:
tf = SimpleNamespace()
tf.__version__ = None
try:
import jax.numpy as jnp
except ImportError:
jnp = SimpleNamespace()
try:
import torch.multiprocessing as multiprocessing
except ImportError:
multiprocessing = SimpleNamespace()
try:
import ivy.functional.backends.jax
except ImportError:
ivy.functional.backends.jax = SimpleNamespace()
try:
import ivy.functional.backends.tensorflow
except ImportError:
ivy.functional.backends.tensorflow = SimpleNamespace()
try:
import ivy.functional.backends.torch
except ImportError:
ivy.functional.backends.torch = SimpleNamespace()
# --- Helpers --- #
# --------------- #
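# The two dummy compositions below are never executed (calling them would
# fail); they exist only so the supported/unsupported device-and-dtype
# introspection tests further down have functions to inspect.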
def _composition_1():
return ivy.relu().argmax()
def _composition_2():
return ivy.ceil() or ivy.linspace()
def _fn1(x, y):
return ivy.matmul(x, y)
def _fn2(x, y):
return ivy.vecdot(x, y)
def _fn3(x, y):
ivy.add(x, y)
def _get_shape_of_list(lst, shape=()):
if not lst:
return []
if not isinstance(lst, Sequence):
return shape
if isinstance(lst[0], Sequence):
length = len(lst[0])
if not all(len(item) == length for item in lst):
msg = "not all lists have the same length"
raise ValueError(msg)
shape += (len(lst),)
shape = _get_shape_of_list(lst[0], shape)
return shape
@st.composite # ToDo remove when helpers.get_dtypes supports it
def _get_valid_numeric_no_unsigned(draw):
return list(
set(draw(helpers.get_dtypes("numeric"))).difference(
draw(helpers.get_dtypes("unsigned"))
)
)
@st.composite
def _isin_data_generation_helper(draw):
assume_unique = draw(st.booleans())
if assume_unique:
dtype_and_x = helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
).filter(lambda x: np.array_equal(x[1][0], np.unique(x[1][0])))
else:
dtype_and_x = helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
shared_dtype=True,
)
return assume_unique, draw(dtype_and_x)
def _supports_inplace_update(ivy_backend, test_flags) -> bool:
supports_array_inplace_update = (
not test_flags.as_variable and ivy_backend.inplace_arrays_supported()
)
supports_variable_inplace_update = (
test_flags.as_variable and ivy_backend.inplace_variables_supported()
)
return supports_array_inplace_update or supports_variable_inplace_update
# fourier_encode
# @given(
# x=helpers.dtype_and_values(ivy_np.valid_float_dtypes, min_num_dims=1),
# max_freq=helpers.dtype_and_values(ivy_np.valid_float_dtypes),
# num_bands=st.integers(min_value=1,max_value=100000),
# as_variable=st.booleans(),
# num_positional_args=st.integers(0, 3),
# native_array=st.booleans(),
# container=st.booleans(),
# instance_method=st.booleans(),
# )
# def test_fourier_encode(
# x,
# max_freq,
# num_bands,
# as_variable,
# num_positional_args,
# native_array,
# container,
# instance_method,
# device,
# call,
# fw
# ):
# # smoke test
# dtype_x, x = x
# dtype_max_freq, max_freq = max_freq
# if fw == "torch" and dtype_x in ["uint16", "uint32", "uint64"]:
# return
# helpers.test_function(
# dtype_x,
# as_variable,
# False,
# num_positional_args,
# native_array,
# container,
# instance_method,
# fw,
# "fourier_encode",
# x=np.asarray(x, dtype=dtype_x),
# max_freq=np.asarray(max_freq,dtype=dtype_max_freq),
# num_bands=num_bands
# )
@st.composite
def _values_and_ndindices(
draw,
*,
array_dtypes,
indices_dtypes=helpers.get_dtypes("integer"),
allow_inf=False,
x_min_value=None,
x_max_value=None,
min_num_dims=2,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
):
x_dtype, x, x_shape = draw(
helpers.dtype_and_values(
available_dtypes=array_dtypes,
allow_inf=allow_inf,
ret_shape=True,
min_value=x_min_value,
max_value=x_max_value,
min_num_dims=min_num_dims,
max_num_dims=max_num_dims,
min_dim_size=min_dim_size,
max_dim_size=max_dim_size,
)
)
    x_dtype = x_dtype[0] if isinstance(x_dtype, list) else x_dtype
    x = x[0] if isinstance(x, list) else x
# indices_dims defines how far into the array to index.
indices_dims = draw(
helpers.ints(
min_value=1,
max_value=len(x_shape) - 1,
)
)
# num_ndindices defines the number of elements to generate.
num_ndindices = draw(
helpers.ints(
min_value=1,
max_value=x_shape[indices_dims],
)
)
    # updates are drawn to match the shape of the indexed sub-arrays.
updates_dtype, updates = draw(
helpers.dtype_and_values(
available_dtypes=array_dtypes,
allow_inf=allow_inf,
shape=x_shape[indices_dims:],
num_arrays=num_ndindices,
shared_dtype=True,
)
)
updates_dtype = (
updates_dtype[0] if isinstance(updates_dtype, list) else updates_dtype
)
updates = updates[0] if isinstance(updates, list) else updates
indices = []
indices_dtype = draw(st.sampled_from(indices_dtypes))
for _ in range(num_ndindices):
nd_index = []
for j in range(indices_dims):
axis_index = draw(
helpers.ints(
min_value=0,
max_value=max(0, x_shape[j] - 1),
)
)
nd_index.append(axis_index)
indices.append(nd_index)
indices = np.array(indices)
return [x_dtype, indices_dtype, updates_dtype], x, indices, updates
@st.composite
def _vector_norm_helper(draw):
dtype, x = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float", key="clip_vector_norm"),
min_num_dims=1,
min_value=-100,
max_value=100,
abs_smallest_val=1e-2,
safety_factor_scale="log",
)
)
if ivy.is_int_dtype(dtype[0]):
max_val = ivy.iinfo(dtype[0]).max
else:
max_val = ivy.finfo(dtype[0]).max
max_x = np.abs(x[0]).max()
if max_x > 1:
max_p = math.log(max_val) / math.log(max_x)
else:
max_p = math.log(max_val)
p = draw(helpers.floats(abs_smallest_val=1e-2, min_value=-max_p, max_value=max_p))
max_norm_val = math.log(max_val / max_x)
max_norm = draw(
helpers.floats(
large_abs_safety_factor=4,
safety_factor_scale="log",
min_value=1e-2,
max_value=max_norm_val,
)
)
return dtype, x, max_norm, p
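# The bound on p above follows from requiring max_x ** p <= max_val: taking
# logs gives p * log(max_x) <= log(max_val), i.e.
#     p <= log(max_val) / log(max_x)    (for max_x > 1),
# which keeps |x| ** p representable in the drawn dtype.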
@st.composite
def array_and_ndindices_batch_dims(
draw,
*,
array_dtypes,
indices_dtypes=helpers.get_dtypes("integer"),
allow_inf=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
):
x_dtype, x, x_shape = draw(
helpers.dtype_and_values(
available_dtypes=array_dtypes,
allow_inf=allow_inf,
ret_shape=True,
min_num_dims=min_num_dims,
max_num_dims=max_num_dims,
min_dim_size=min_dim_size,
max_dim_size=max_dim_size,
)
)
batch_dims = draw(
helpers.ints(
min_value=0,
max_value=len(x_shape) - 1,
)
)
# indices_dims defines how far into the array to index.
indices_dims = draw(
helpers.ints(
min_value=1,
max_value=max(1, len(x_shape) - batch_dims),
)
)
batch_shape = x_shape[0:batch_dims]
shape_var = draw(
helpers.get_shape(
allow_none=False,
min_num_dims=min_num_dims,
max_num_dims=max_num_dims - batch_dims,
min_dim_size=min_dim_size,
max_dim_size=max_dim_size,
)
)
ndindices_shape = list(batch_shape) + list(shape_var) + [indices_dims]
ndindices = np.zeros(ndindices_shape, dtype="int32")
if len(ndindices_shape) <= 1:
enumerator = ndindices
else:
enumerator = np.zeros(ndindices_shape[0:-1], dtype="int32")
ndindices_dtype = draw(st.sampled_from(indices_dtypes))
for idx, _ in np.ndenumerate(enumerator):
bounds = []
for j in range(0, indices_dims):
bounds.append(x_shape[j + batch_dims] - 1)
ndindices[idx] = draw(ndindices_with_bounds(bounds=bounds))
ndindices = np.asarray(ndindices, ndindices_dtype)
return [x_dtype[0], ndindices_dtype], x[0], ndindices, batch_dims
@st.composite
def ndindices_with_bounds(
draw,
*,
bounds,
):
arr = []
for i in bounds:
x = draw(
helpers.ints(
min_value=0,
max_value=max(0, i),
)
)
arr.append(x)
return arr
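# Hedged numpy illustration of the batch_dims convention exercised by the
# strategy above (an independent reference sketch, not the implementation
# under test): with batch_dims=1, each batch entry is gathered separately.
def _gather_nd_batch_dims_sketch():
    import numpy as np
    params = np.arange(12).reshape(2, 3, 2)  # (batch, M, N)
    ndindices = np.array([[[0], [2]], [[1], [1]]])  # (batch, K, 1)
    # gather along axis 0 of each batch slice independently -> (2, 2, 2)
    return np.stack([params[b][ndindices[b, ..., 0]] for b in range(2)])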
# --- Main --- #
# ------------ #
# all_equal
@handle_test(
fn_tree="functional.ivy.all_equal",
dtypes_and_xs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=helpers.ints(min_value=2, max_value=10),
min_num_dims=1,
),
equality_matrix=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_all_equal(
dtypes_and_xs, equality_matrix, test_flags, backend_fw, fn_name, on_device
):
dtypes, arrays = dtypes_and_xs
kw = {}
i = 0
for x_ in arrays:
kw[f"x{i}"] = x_
i += 1
test_flags.num_positional_args = len(arrays)
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
**kw,
equality_matrix=equality_matrix,
)
def test_arg_info():
return
@given(
x_n_value=st.sampled_from(
[
[ivy.value_is_nan, ["x", "include_infs"]],
[ivy.clip_matrix_norm, ["x", "max_norm", "p", "out"]],
]
)
)
def test_arg_names(x_n_value):
x, value = x_n_value
ret = ivy.arg_names(x)
assert ret == value
# array_equal
@handle_test(
fn_tree="functional.ivy.array_equal",
dtypes_and_xs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=2,
),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_array_equal(dtypes_and_xs, test_flags, backend_fw, fn_name, on_device):
dtypes, arrays = dtypes_and_xs
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x0=arrays[0],
x1=arrays[1],
)
@handle_test(
fn_tree="functional.ivy.assert_supports_inplace",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
ground_truth_backend="numpy",
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_assert_supports_inplace(
x_val_and_dtypes, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
if backend_fw in ["tensorflow", "jax", "paddle"]:
return
assume("bfloat16" not in dtype)
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
def test_cache_fn():
def func():
return ivy.random_uniform()
# return a single cached_fn and then query this
cached_fn = ivy.cache_fn(func)
ret0 = cached_fn()
ret0_again = cached_fn()
ret1 = func()
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
# call ivy.cache_fn repeatedly, the new cached functions
# each use the same global dict
ret0 = ivy.cache_fn(func)()
ret0_again = ivy.cache_fn(func)()
ret1 = func()
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
def test_cache_fn_with_args():
def func(_):
return ivy.random_uniform()
# return a single cached_fn and then query this
cached_fn = ivy.cache_fn(func)
ret0 = cached_fn(0)
ret0_again = cached_fn(0)
ret1 = cached_fn(1)
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
# call ivy.cache_fn repeatedly, the new cached functions
# each use the same global dict
ret0 = ivy.cache_fn(func)(0)
ret0_again = ivy.cache_fn(func)(0)
ret1 = ivy.cache_fn(func)(1)
assert ivy.to_numpy(ret0).item() == ivy.to_numpy(ret0_again).item()
assert ivy.to_numpy(ret0).item() != ivy.to_numpy(ret1).item()
assert ret0 is ret0_again
assert ret0 is not ret1
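# A minimal sketch of the caching contract the two tests above rely on (an
# assumption for illustration, not ivy's actual implementation): results are
# memoised per (function, args) key in one shared dict, so re-wrapping the
# same function still hits the same entries.
def _cache_fn_sketch(fn, _shared_cache={}):
    def wrapped(*args):
        key = (fn, args)
        if key not in _shared_cache:
            _shared_cache[key] = fn(*args)
        return _shared_cache[key]
    return wrapped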
# clip_matrix_norm
@handle_test(
fn_tree="functional.ivy.clip_matrix_norm",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=2,
max_num_dims=5,
min_dim_size=1,
max_dim_size=5,
min_value=-10,
max_value=10,
abs_smallest_val=1e-4,
),
max_norm=st.floats(min_value=0.137, max_value=1e05),
p=st.sampled_from([1, 2, float("inf"), "fro", "nuc"]),
)
def test_clip_matrix_norm(
dtype_x, max_norm, p, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_x
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-2,
atol_=1e-2,
x=x[0],
max_norm=max_norm,
p=p,
)
# clip_vector_norm
@handle_test(
fn_tree="functional.ivy.clip_vector_norm",
dtype_x_max_norm_p=_vector_norm_helper(),
)
def test_clip_vector_norm(
*, dtype_x_max_norm_p, test_flags, backend_fw, fn_name, on_device
):
dtype, x, max_norm, p = dtype_x_max_norm_p
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-1,
atol_=1e-1,
x=x[0],
max_norm=max_norm,
p=p,
)
# container types
def test_container_types():
cont_types = ivy.container_types()
assert isinstance(cont_types, list)
for cont_type in cont_types:
assert hasattr(cont_type, "keys")
assert hasattr(cont_type, "values")
assert hasattr(cont_type, "items")
# Still to Add #
# ---------------#
@given(fw=st.sampled_from(["torch", "tensorflow", "numpy", "jax"]))
def test_current_backend_str(fw):
ivy.set_backend(fw)
assert ivy.current_backend_str() == fw
ivy.previous_backend()
# default
@handle_test(
fn_tree="functional.ivy.default",
x=st.one_of(
st.none(),
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=0,
min_dim_size=2,
),
st.sampled_from([lambda *args, **kwargs: None]),
),
default_val=st.one_of(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=0,
min_dim_size=2,
),
st.sampled_from([lambda *args, **kwargs: None]),
),
)
def test_default(x, default_val, test_flags, backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
with_callable = False
if x is not None:
if callable(x):
with_callable = True
else:
x_dtype, x = x
x = x[0].tolist() if isinstance(x, list) else x
else:
if callable(default_val):
with_callable = True
else:
dv_dtype, default_val = default_val
default_val = (
default_val[0].tolist()
if isinstance(default_val, list)
else default_val
)
truth_val = ivy_backend.to_native(x if x is not None else default_val)
if with_callable:
assert ivy_backend.default(x, default_val) == truth_val
else:
assert_all_close(
np.asarray(ivy_backend.default(x, default_val)),
np.asarray(truth_val),
rtol=1e-3,
atol=1e-3,
backend=backend_fw,
ground_truth_backend=test_flags.ground_truth_backend,
)
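# Sketch of the semantics asserted above (an assumption mirroring the test,
# not ivy's source): ivy.default returns x when it exists, otherwise the
# default value, invoking either argument first when it is callable.
def _default_sketch(x, default_val):
    x = x() if callable(x) else x
    if x is not None:
        return x
    return default_val() if callable(default_val) else default_val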
# ToDo: re-add this test once ivy.get_backend is working correctly, with the returned
# ivy handle having no dependence on the globally set ivy
# @handle_cmd_line_args
#
# def test_class_ivy_handles(device, call):
#
# if call is helpers.np_call:
# # Numpy is the conflicting framework being tested against
# pytest.skip()
#
# class ArrayGen:
# def __init__(self, ivyh):
# self._ivy = ivyh
#
# def get_array(self):
# return self._ivy.array([0.0, 1.0, 2.0], dtype="float32", device=device)
#
# # create instance
# ag = ArrayGen(ivy.get_backend())
#
# # create array from array generator
# x = ag.get_array()
#
# # verify this is not a numpy array
# assert not isinstance(x, np.ndarray)
#
# # change global framework to numpy
# ivy.set_backend("numpy")
#
# # create another array from array generator
# x = ag.get_array()
#
# # verify this is not still a numpy array
# assert not isinstance(x, np.ndarray)
# einops_rearrange
@handle_test(
fn_tree="functional.ivy.einops_rearrange",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=4,
max_num_dims=4,
min_dim_size=2,
max_dim_size=2,
min_value=-1e05,
max_value=1e05,
).filter(
lambda x: (ivy.array([x[1][0]], dtype="float32").shape[2] % 2 == 0)
and (ivy.array([x[1][0]], dtype="float32").shape[3] % 2 == 0)
and (x[0][0] not in ["float16", "bfloat16"])
),
pattern_and_axes_lengths=st.sampled_from(
[
("b h w c -> b h w c", {}),
("b h w c -> (b h) w c", {}),
("b h w c -> b c h w", {}),
("b h w c -> h (b w) c", {}),
("b h w c -> b (c h w)", {}),
("b (h1 h) (w1 w) c -> (b h1 w1) h w c", {"h1": 2, "w1": 2}),
("b (h h1) (w w1) c -> b h w (c h1 w1)", {"h1": 2, "w1": 2}),
]
),
)
def test_einops_rearrange(
dtype_x, pattern_and_axes_lengths, test_flags, backend_fw, fn_name, on_device
):
pattern, axes_lengths = pattern_and_axes_lengths
dtype, x = dtype_x
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
pattern=pattern,
**axes_lengths,
)
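# Worked example for one of the sampled patterns above, using the einops
# library directly: "b h w c -> b (c h w)" flattens everything but the
# batch axis, so (2, 4, 4, 3) becomes (2, 48).
def _rearrange_example():
    import numpy as np
    from einops import rearrange
    y = rearrange(np.zeros((2, 4, 4, 3)), "b h w c -> b (c h w)")
    assert y.shape == (2, 48)
    return y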
# einops_reduce
@handle_test(
fn_tree="functional.ivy.einops_reduce",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=4,
max_num_dims=4,
min_dim_size=2,
max_dim_size=2,
min_value=-1e05,
max_value=1e05,
).filter(
lambda x: (ivy.array([x[1][0]], dtype="float32").shape[2] % 2 == 0)
and (ivy.array([x[1][0]], dtype="float32").shape[3] % 2 == 0)
and (x[0][0] not in ["float16", "bfloat16"])
),
pattern_and_axes_lengths=st.sampled_from(
[
("b c (h1 h2) (w1 w2) -> b c h1 w1", {"h2": 2, "w2": 2}),
]
),
floattypes=helpers.get_dtypes("float"),
reduction=st.sampled_from(["min", "max", "sum", "mean", "prod"]),
)
def test_einops_reduce(
*,
dtype_x,
pattern_and_axes_lengths,
floattypes,
reduction,
test_flags,
backend_fw,
fn_name,
on_device,
):
pattern, axes_lengths = pattern_and_axes_lengths
dtype, x = dtype_x
    if reduction in ["mean", "prod"] and dtype[0] not in floattypes:
        dtype = ["float32"]
    # torch computes min and max differently, which leads to inconsistent gradients
    if backend_fw == "torch" and reduction in ["min", "max"]:
        test_flags.test_gradients = False
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-1,
atol_=1e-1,
x=x[0],
pattern=pattern,
reduction=reduction,
**axes_lengths,
)
# einops_repeat
@handle_test(
fn_tree="functional.ivy.einops_repeat",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=2,
max_num_dims=2,
min_dim_size=2,
),
pattern_and_axes_lengths=st.sampled_from(
[
("h w -> h w repeat", {"repeat": 2}),
("h w -> (repeat h) w", {"repeat": 2}),
("h w -> h (repeat w)", {"repeat": 2}),
("h w -> (h h2) (w w2)", {"h2": 2, "w2": 2}),
("h w -> w h", {}),
]
),
)
def test_einops_repeat(
*, dtype_x, pattern_and_axes_lengths, test_flags, backend_fw, fn_name, on_device
):
pattern, axes_lengths = pattern_and_axes_lengths
dtype, x = dtype_x
assume("uint16" not in dtype)
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
pattern=pattern,
**axes_lengths,
)
# exists
@handle_test(
fn_tree="functional.ivy.exists",
x=st.one_of(
st.none(),
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=0,
min_dim_size=1,
),
st.sampled_from([ivy.array]),
),
)
def test_exists(x):
if x is not None:
if not callable(x):
dtype, x = x
ret = ivy.exists(x)
assert isinstance(ret, bool)
y_true = x is not None
assert ret == y_true
def test_explicit_ivy_framework_handles(backend_fw):
if backend_fw == "numpy":
# Numpy is the conflicting framework being tested against
pytest.skip()
# set with explicit handle caught
ivy_exp = ivy.with_backend(backend_fw)
assert ivy_exp.current_backend_str() == backend_fw
# assert backend implemented function is accessible
assert "array" in ivy_exp.__dict__
assert callable(ivy_exp.array)
# assert joint implemented function is also accessible
assert "cache_fn" in ivy_exp.__dict__
assert callable(ivy_exp.cache_fn)
# set global ivy to numpy
ivy.set_backend("numpy")
# assert the explicit handle is still unchanged
assert ivy.current_backend_str() == "numpy"
assert ivy_exp.current_backend_str() == backend_fw
# unset global ivy from numpy
ivy.previous_backend()
def test_framework_setting_with_multiprocessing(backend_fw):
if backend_fw == "numpy":
# Numpy is the conflicting framework being tested against
pytest.skip()
def worker_fn(out_queue):
ivy.set_backend("numpy")
x_ = np.array([0.0, 1.0, 2.0])
for _ in range(1000):
try:
ivy.mean(x_)
except TypeError:
out_queue.put(False)
return
ivy.previous_backend()
out_queue.put(True)
# get original framework string and array
ivy.set_backend(backend_fw)
x = ivy.array([0.0, 1.0, 2.0])
# start numpy loop thread
output_queue = multiprocessing.Queue()
worker = multiprocessing.Process(target=worker_fn, args=(output_queue,))
worker.start()
# start local original framework loop
for _ in range(1000):
ivy.mean(x)
ivy.previous_backend()
worker.join()
assert output_queue.get_nowait()
def test_framework_setting_with_threading(backend_fw):
    if backend_fw == "jax":
        # Jax is the conflicting framework being tested against
        pytest.skip()
    def thread_fn(results):
        x_ = jnp.array([0.0, 1.0, 2.0])
        ivy.set_backend("jax")
        for _ in range(2000):
            try:
                ivy.mean(x_)
            except TypeError:
                results.append(False)
                return
        ivy.previous_backend()
        results.append(True)
    # start jax loop thread; collect its outcome in a list, since
    # Thread.join() always returns None and would swallow the result
    results = []
    thread = threading.Thread(target=thread_fn, args=(results,))
    thread.start()
    time.sleep(0.01)
    ivy.set_backend(backend_fw)
    x = ivy.array([0.0, 1.0, 2.0])
    # start local original framework loop
    for _ in range(2000):
        ivy.mean(x)
    ivy.previous_backend()
    thread.join()
    assert results == [True]
# function_supported_devices_and_dtypes
@pytest.mark.parametrize(
"func",
[_composition_1, _composition_2],
)
def test_function_supported_device_and_dtype(func, backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
res = ivy_backend.function_supported_devices_and_dtypes(func, recurse=True)
exp = {"cpu": func.test_unsupported_devices_and_dtypes.copy()["cpu"]}
for dev in exp:
exp[dev] = tuple(
set(ivy.valid_dtypes).difference(exp[dev][ivy.current_backend_str()])
)
all_key = set(res.keys()).union(set(exp.keys()))
for key in all_key:
assert key in res
assert key in exp
assert set(res[key]) == set(exp[key])
# function_unsupported_devices_and_dtypes
@pytest.mark.parametrize(
"func",
[_composition_1, _composition_2],
)
def test_function_unsupported_devices(func, backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
res = ivy_backend.function_unsupported_devices_and_dtypes(func)
exp = func.test_unsupported_devices_and_dtypes.copy()
for dev in exp:
exp[dev] = exp[dev][backend_fw]
devs = list(exp.keys())
for dev in devs:
if len(exp[dev]) == 0:
exp.pop(dev)
all_key = set(res.keys()).union(set(exp.keys()))
for key in all_key:
assert key in res
assert key in exp
assert set(res[key]) == set(exp[key])
# gather
@handle_test(
fn_tree="functional.ivy.gather",
params_indices_others=helpers.array_indices_axis(
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int32", "int64"],
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
)
def test_gather(params_indices_others, test_flags, backend_fw, fn_name, on_device):
dtypes, params, indices, axis, batch_dims = params_indices_others
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
xs_grad_idxs=[[0, 0]],
params=params,
indices=indices,
axis=axis,
batch_dims=batch_dims,
)
# gather_nd
@handle_test(
fn_tree="functional.ivy.gather_nd",
params_n_ndindices_batch_dims=array_and_ndindices_batch_dims(
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int32", "int64"],
allow_inf=False,
),
)
def test_gather_nd(
params_n_ndindices_batch_dims, test_flags, backend_fw, fn_name, on_device
):
dtypes, params, ndindices, batch_dims = params_n_ndindices_batch_dims
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
xs_grad_idxs=[[0, 0]],
params=params,
indices=ndindices,
batch_dims=batch_dims,
)
def test_get_all_arrays_in_memory():
return
# get_item
# TODO: add container and array instance methods
@handle_test(
fn_tree="functional.ivy.get_item",
ground_truth_backend="numpy",
dtypes_x_query=helpers.dtype_array_query(
available_dtypes=helpers.get_dtypes("valid"),
),
copy=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
test_instance_method=st.just(False),
container_flags=st.just([False]),
test_with_copy=st.just(True),
)
def test_get_item(
dtypes_x_query,
copy,
test_flags,
backend_fw,
fn_name,
on_device,
):
dtypes, x, query = dtypes_x_query
try:
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x,
query=query,
copy=copy,
)
except ivy.utils.exceptions.IvyBackendException as e:
if backend_fw == "paddle" and "only supports access to dimension 0 to 9" in e:
assume(False)
else:
raise
# get_min_base
def test_get_min_base():
assert ivy.min_base == 1e-5
# get_min_denominator
def test_get_min_denominator():
assert ivy.min_denominator == 1e-12
# get_num_dims
@handle_test(
fn_tree="functional.ivy.get_num_dims",
x0_n_x1_n_res=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
as_array=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_get_num_dims(
x0_n_x1_n_res, as_array, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x0_n_x1_n_res
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
as_array=as_array,
)
# get_queue_timeout
@given(
x=st.floats(allow_nan=False, allow_infinity=False),
)
def test_get_queue_timeout(x):
ivy.set_queue_timeout(x)
ret = ivy.queue_timeout
assert ret == x
# get_referrers_recursive
def test_get_referrers_recursive():
class SomeClass:
def __init__(self):
self.x = [1, 2]
self.y = [self.x]
some_obj = SomeClass()
refs = ivy.get_referrers_recursive(some_obj.x)
ref_keys = refs.keys()
assert len(ref_keys) == 3
assert "repr" in ref_keys
assert refs["repr"] == "[1,2]"
y_id = str(id(some_obj.y))
y_refs = refs[y_id]
assert y_refs["repr"] == "[[1,2]]"
some_obj_dict_id = str(id(some_obj.__dict__))
assert y_refs[some_obj_dict_id] == "tracked"
dict_refs = refs[some_obj_dict_id]
assert dict_refs["repr"] == "{'x':[1,2],'y':[[1,2]]}"
some_obj_id = str(id(some_obj))
some_obj_refs = dict_refs[some_obj_id]
assert some_obj_refs["repr"] == str(some_obj).replace(" ", "")
assert len(some_obj_refs) == 1
# get_tmp_dir
def test_get_tmp_dir():
ret = ivy.tmp_dir
assert ret == "/tmp"
# has_nans
@handle_test(
fn_tree="functional.ivy.has_nans",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
allow_nan=True,
allow_inf=True,
),
include_infs=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_has_nans(
*, x_val_and_dtypes, include_infs, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
include_infs=include_infs,
)
def test_inplace_arrays_supported(backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
if backend_fw in ["numpy", "torch"]:
assert ivy_backend.inplace_arrays_supported()
elif backend_fw in ["jax", "tensorflow", "paddle"]:
assert not ivy_backend.inplace_arrays_supported()
else:
raise RuntimeError("Unrecognized framework")
# inplace_decrement
@handle_test(
fn_tree="functional.ivy.inplace_decrement",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
num_arrays=2,
shared_dtype=True,
safety_factor_scale="log",
),
)
def test_inplace_decrement(x_val_and_dtypes, test_flags, on_device, backend_fw):
dtype = x_val_and_dtypes[0][0]
x, val = x_val_and_dtypes[1]
x, val = x.tolist(), val.tolist()
with BackendHandler.update_backend(backend_fw) as ivy_backend:
x = ivy_backend.array(x, dtype=dtype, device=on_device)
val = ivy_backend.array(val, dtype=dtype, device=on_device)
new_val = x - val
supports_update = _supports_inplace_update(ivy_backend, test_flags)
if supports_update:
x_inplace = ivy_backend.inplace_decrement(x, val)
assert id(x_inplace) == id(x)
x = helpers.flatten_and_to_np(ret=x, backend=backend_fw)
new_val = helpers.flatten_and_to_np(ret=new_val, backend=backend_fw)
helpers.value_test(
ret_np_flat=x, ret_np_from_gt_flat=new_val, backend=backend_fw
)
# inplace_increment
@handle_test(
fn_tree="functional.ivy.inplace_increment",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
allow_inf=False,
min_num_dims=1,
max_num_dims=1,
min_dim_size=2,
num_arrays=2,
shared_dtype=True,
),
)
def test_inplace_increment(x_val_and_dtypes, test_flags, on_device, backend_fw):
dtype = x_val_and_dtypes[0][0]
with BackendHandler.update_backend(backend_fw) as ivy_backend:
if dtype in ivy_backend.function_unsupported_dtypes(
ivy_backend.inplace_increment
):
return
x, val = x_val_and_dtypes[1]
x, val = x.tolist(), val.tolist()
x = ivy_backend.array(x, dtype=dtype, device=on_device)
val = ivy_backend.array(val, dtype=dtype, device=on_device)
new_val = x + val
supports_update = _supports_inplace_update(ivy_backend, test_flags)
if supports_update:
x_inplace = ivy_backend.inplace_increment(x, val)
assert id(x_inplace) == id(x)
x = helpers.flatten_and_to_np(ret=x, backend=backend_fw)
new_val = helpers.flatten_and_to_np(ret=new_val, backend=backend_fw)
helpers.value_test(
ret_np_flat=x, ret_np_from_gt_flat=new_val, backend=backend_fw
)
# inplace_update
@handle_test(
fn_tree="functional.ivy.inplace_update",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
keep_x_dtype=st.booleans(),
inplace_mode=st.just("lenient"),
)
def test_inplace_update(
x_val_and_dtypes, keep_x_dtype, inplace_mode, test_flags, on_device, backend_fw
):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
dtype = x_val_and_dtypes[0][0]
if dtype in ivy_backend.function_unsupported_dtypes(ivy_backend.inplace_update):
return
x, val = x_val_and_dtypes[1]
x = ivy_backend.array(x.tolist(), dtype=dtype, device=on_device)
val = ivy_backend.array(val.tolist(), dtype=dtype, device=on_device)
ivy_backend.set_inplace_mode(inplace_mode)
supports_update = _supports_inplace_update(ivy_backend, test_flags)
if supports_update or ivy_backend.inplace_mode == "lenient":
if keep_x_dtype:
x_dtype = x.dtype
x_inplace = ivy_backend.inplace_update(x, val, keep_input_dtype=True)
assert x_dtype == x_inplace.dtype
else:
x_inplace = ivy_backend.inplace_update(x, val)
assert id(x_inplace) == id(x)
x = helpers.flatten_and_to_np(backend=backend_fw, ret=x)
val = helpers.flatten_and_to_np(backend=backend_fw, ret=val)
helpers.value_test(
backend=backend_fw, ret_np_flat=x, ret_np_from_gt_flat=val
)
elif not supports_update and ivy_backend.inplace_mode == "strict":
with pytest.raises(ivy.utils.exceptions.InplaceUpdateException):
ivy_backend.inplace_update(x, val)
def test_inplace_variables_supported(backend_fw):
with BackendHandler.update_backend(backend_fw) as ivy_backend:
if backend_fw in ["numpy", "torch", "tensorflow"]:
assert ivy_backend.inplace_variables_supported()
elif backend_fw in ["jax", "paddle"]:
assert not ivy_backend.inplace_variables_supported()
else:
raise RuntimeError("Unrecognized framework")
# is_array
@handle_test(
fn_tree="functional.ivy.is_array",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
exclusive=st.booleans(),
as_variable_flags=st.just([False]),
container_flags=st.just([False]),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_is_array(
x_val_and_dtypes, exclusive, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
# as_variable=False as the result can't be consistent across backends
if test_flags.container[0]:
# container instance methods should also not be tested
test_flags.instance_method = False
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
exclusive=exclusive,
)
# is_ivy_array
@handle_test(
fn_tree="functional.ivy.is_ivy_array",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
exclusive=st.booleans(),
ground_truth_backend="numpy",
as_variable_flags=st.just([False]),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_is_ivy_array(
*, x_val_and_dtypes, exclusive, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
# as_variable=False as the result can't be consistent across backends
if test_flags.container[0]:
# container instance methods should also not be tested
test_flags.instance_method = False
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
exclusive=exclusive,
)
# is_ivy_container
@handle_test(
fn_tree="functional.ivy.is_ivy_container",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
test_with_out=st.just(False),
test_instance_method=st.just(False),
test_gradients=st.just(False),
)
def test_is_ivy_container(x_val_and_dtypes, test_flags, backend_fw, fn_name, on_device):
dtype, x = x_val_and_dtypes
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
# is_native_array
@handle_test(
fn_tree="functional.ivy.is_native_array",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
exclusive=st.booleans(),
as_variable_flags=st.just([False]),
container_flags=st.just([False]),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_is_native_array(
*, x_val_and_dtypes, test_flags, exclusive, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
# as_variable=False as the result can't be consistent across backends
if test_flags.container[0]:
# container instance methods should also not be tested
test_flags.instance_method = False
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
exclusive=exclusive,
)
@handle_test(
fn_tree="functional.ivy.isin",
assume_unique_and_dtype_and_x=_isin_data_generation_helper(),
invert=st.booleans(),
ground_truth_backend="numpy",
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_isin(
assume_unique_and_dtype_and_x,
invert,
test_flags,
backend_fw,
on_device,
):
assume_unique, x_and_dtype = assume_unique_and_dtype_and_x
dtypes, values = x_and_dtype
elements, test_elements = values
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name="isin",
elements=elements,
test_elements=test_elements,
invert=invert,
assume_unique=assume_unique,
)
@handle_test(
fn_tree="functional.ivy.itemsize",
x_and_dtype=helpers.dtype_and_values(available_dtypes=helpers.get_dtypes("valid")),
ground_truth_backend="numpy",
test_instance_method=st.just(False),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_itemsize(x_and_dtype, test_flags, backend_fw, fn_name, on_device):
dtype, x = x_and_dtype
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
# match_kwargs
@given(allow_duplicates=st.booleans())
def test_match_kwargs(allow_duplicates):
def func_a(a, b, c=2):
pass
def func_b(a, d, e=5):
return None
class ClassA:
def __init__(self, c, f, g=3):
pass
kwargs = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4, "f": 5, "g": 6}
kwfa, kwfb, kwca = ivy.match_kwargs(
kwargs, func_a, func_b, ClassA, allow_duplicates=allow_duplicates
)
if allow_duplicates:
assert kwfa == {"a": 0, "b": 1, "c": 2}
assert kwfb == {"a": 0, "d": 3, "e": 4}
assert kwca == {"c": 2, "f": 5, "g": 6}
else:
assert kwfa == {"a": 0, "b": 1, "c": 2}
assert kwfb == {"d": 3, "e": 4}
assert kwca == {"f": 5, "g": 6}
def test_num_arrays_in_memory():
return
def test_print_all_arrays_in_memory():
return
# scatter_flat
@handle_test(
fn_tree="functional.ivy.scatter_flat",
x=st.integers(min_value=1, max_value=10).flatmap(
lambda n: st.tuples(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_num_dims=1,
min_dim_size=n,
max_dim_size=n,
),
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("integer"),
min_value=0,
max_value=max(n - 1, 0),
min_num_dims=1,
max_num_dims=1,
min_dim_size=n,
max_dim_size=n,
).filter(lambda d_n_v: len(set(d_n_v[1][0])) == len(d_n_v[1][0])),
st.integers(min_value=n, max_value=n),
)
),
reduction=st.sampled_from(["sum", "min", "max", "replace"]),
ground_truth_backend="tensorflow",
)
def test_scatter_flat(x, reduction, test_flags, backend_fw, fn_name, on_device):
# scatter_flat throws an error while computing gradients for tensorflow
# this has been fixed in the newer versions of tensorflow (2.10.0 onwards)
if backend_fw == "tensorflow":
grad_support_version = [2, 10, 0]
k = 0
for number in [int(s) for s in tf.__version__.split(".") if s.isdigit()]:
if k > len(grad_support_version):
break
if number < grad_support_version[k]:
test_flags.test_gradients = False
k += 1
(val_dtype, vals), (ind_dtype, ind), size = x
helpers.test_function(
input_dtypes=ind_dtype + val_dtype,
test_flags=test_flags,
xs_grad_idxs=[[0, 1]],
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
indices=ind[0],
updates=vals[0],
size=size,
reduction=reduction,
)
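# Hedged numpy reference for scatter_flat with the "sum" reduction (an
# independent sketch, not the implementation under test): duplicate indices
# would accumulate, though the strategy above only draws unique indices.
def _scatter_flat_sum_sketch(indices, updates, size):
    import numpy as np
    out = np.zeros(size, dtype=np.asarray(updates).dtype)
    np.add.at(out, np.asarray(indices), np.asarray(updates))
    return out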
# scatter_nd
@handle_test(
fn_tree="functional.ivy.scatter_nd",
x=_values_and_ndindices(
# ToDo: needs support for boolean arrays
array_dtypes=helpers.get_dtypes("numeric"),
indices_dtypes=["int32", "int64"],
x_min_value=0,
x_max_value=0,
min_num_dims=2,
allow_inf=False,
),
reduction=st.sampled_from(["sum", "min", "max", "replace"]),
test_gradients=st.just(False),
)
def test_scatter_nd(x, reduction, test_flags, backend_fw, fn_name, on_device):
(val_dtype, ind_dtype, update_dtype), vals, ind, updates = x
shape = vals.shape
helpers.test_function(
input_dtypes=[ind_dtype, update_dtype],
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
indices=np.asarray(ind, dtype=ind_dtype),
updates=updates,
shape=shape,
reduction=reduction,
)
# Tests #
# ------#
@pytest.mark.parametrize("mode", ["lenient", "strict"])
def test_set_inplace_mode(mode):
ivy.set_inplace_mode(mode)
assert ivy.inplace_mode == mode
# set_item
# TODO: add container and array instance methods
@handle_test(
fn_tree="functional.ivy.set_item",
ground_truth_backend="numpy",
dtypes_x_query_val=helpers.dtype_array_query_val(
available_dtypes=helpers.get_dtypes("valid"),
),
copy=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
test_instance_method=st.just(False),
container_flags=st.just([False]),
test_with_copy=st.just(True),
)
@handle_example(
test_example=True,
test_flags={
"num_positional_args": 3,
},
dtypes_x_query_val=(
["int32", "int32"],
np.ones((1, 3, 3, 3)),
(slice(None, None, None), slice(None, None, None), slice(None, None, None), 1),
np.zeros((3, 1)),
),
copy=False,
fn_name="set_item",
)
def test_set_item(
dtypes_x_query_val,
copy,
test_flags,
backend_fw,
fn_name,
on_device,
):
dtypes, x, query, val = dtypes_x_query_val
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x,
query=query,
val=val,
copy=copy,
)
# set_min_base
@given(x=st.floats(allow_nan=False, allow_infinity=False))
def test_set_min_base(x):
ivy.set_min_base(x)
assert ivy.min_base == x
# set_min_denominator
@given(x=st.floats(allow_nan=False, allow_infinity=False))
def test_set_min_denominator(x):
ivy.set_min_denominator(x)
assert ivy.min_denominator == x
# set_queue_timeout
@given(
x=st.floats(allow_nan=False, allow_infinity=False),
)
def test_set_queue_timeout(x):
ivy.set_queue_timeout(x)
ret = ivy.queue_timeout
assert ret == x
# set_tmp_dir
def test_set_tmp_dir():
ivy.set_tmp_dir("/new_dir")
ret = ivy.tmp_dir
assert ret == "/new_dir"
# shape
# TODO: add container and array methods
@handle_test(
fn_tree="functional.ivy.shape",
x0_n_x1_n_res=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
as_array=st.booleans(),
test_with_out=st.just(False),
test_instance_method=st.just(False),
test_gradients=st.just(False),
)
def test_shape(x0_n_x1_n_res, as_array, test_flags, backend_fw, fn_name, on_device):
dtype, x = x0_n_x1_n_res
# instance_method=False because the shape property would overwrite the shape method
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
as_array=as_array,
)
# stable_divide
@handle_test(
fn_tree="functional.ivy.stable_divide",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=3,
shared_dtype=True,
small_abs_safety_factor=8,
large_abs_safety_factor=8,
safety_factor_scale="log",
),
test_with_out=st.just(False),
)
def test_stable_divide(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
numerator=x[0],
denominator=x[1],
min_denominator=x[2],
)
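# The stabilisation under test, stated as a formula (an assumption about the
# exact implementation): numerator / (denominator + min_denominator), so a
# zero denominator yields a large but finite quotient instead of inf/nan.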
# stable_pow
@handle_test(
fn_tree="functional.ivy.stable_pow",
dtypes_and_xs=pow_helper(available_dtypes=_get_valid_numeric_no_unsigned()),
min_base=helpers.floats(
min_value=0, max_value=1, small_abs_safety_factor=8, safety_factor_scale="log"
),
test_with_out=st.just(False),
)
def test_stable_pow(
*, dtypes_and_xs, min_base, test_flags, backend_fw, fn_name, on_device
):
dtypes, xs = dtypes_and_xs
assume(all("bfloat16" not in x for x in dtypes))
helpers.test_function(
input_dtypes=dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
rtol_=1e-1,
atol_=1e-1,
base=xs[0][0],
exponent=np.abs(xs[1]),
min_base=min_base,
)
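# Likewise for stable_pow (an assumption about the exact formula):
# (base + min_base) ** exponent, which keeps a zero base away from the
# singular 0 ** negative_exponent case.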
@handle_test(
fn_tree="functional.ivy.strides",
x_and_dtype=helpers.dtype_and_values(available_dtypes=helpers.get_dtypes("valid")),
test_instance_method=st.just(False),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_strides(x_and_dtype, test_flags, backend_fw, fn_name, on_device):
dtype, x = x_and_dtype
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
@handle_test(
fn_tree="functional.ivy.supports_inplace_updates",
x_val_and_dtypes=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid")
),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_supports_inplace_updates(
x_val_and_dtypes, test_flags, backend_fw, fn_name, on_device
):
dtype, x = x_val_and_dtypes
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
test_values=False,
x=x[0],
)
# to_list
@handle_test(
fn_tree="functional.ivy.to_list",
x0_n_x1_n_res=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=20,
),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_to_list(x0_n_x1_n_res, test_flags, backend_fw, fn_name, on_device):
dtype, x = x0_n_x1_n_res
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
# to_numpy
@handle_test(
fn_tree="functional.ivy.to_numpy",
dtype_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
),
copy=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
test_with_copy=st.just(True),
)
def test_to_numpy(*, dtype_x, copy, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_x
# torch throws an exception
    if backend_fw == "torch" and not copy:
return
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
copy=copy,
)
# to_scalar
@handle_test(
fn_tree="functional.ivy.to_scalar",
x0_n_x1_n_res=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
max_num_dims=1,
min_dim_size=1,
max_dim_size=1,
large_abs_safety_factor=20,
),
test_with_out=st.just(False),
test_gradients=st.just(False),
test_with_copy=st.just(True),
)
def test_to_scalar(x0_n_x1_n_res, test_flags, backend_fw, fn_name, on_device):
dtype, x = x0_n_x1_n_res
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
# try_else_none
@given(
x=st.booleans(),
)
def test_try_else_none(x):
if x:
fn = ivy.try_else_none(lambda: True)
assert fn() is True
else:
fn = ivy.try_else_none(lambda x: x)
assert fn is None
@pytest.mark.parametrize("mode", ["lenient", "strict"])
def test_unset_inplace_mode(mode):
ivy.set_inplace_mode(mode)
ivy.unset_inplace_mode()
assert ivy.inplace_mode == "lenient"
def test_use_within_use_framework():
with ivy.functional.backends.numpy.use:
pass
with ivy.functional.backends.jax.use:
pass
with ivy.functional.backends.tensorflow.use:
pass
with ivy.functional.backends.torch.use:
pass
# value_is_nan
@handle_test(
fn_tree="functional.ivy.value_is_nan",
val_dtype=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
max_dim_size=1,
max_num_dims=1,
allow_nan=True,
allow_inf=True,
),
include_infs=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_value_is_nan(
*, val_dtype, include_infs, test_flags, backend_fw, fn_name, on_device
):
dtype, val = val_dtype
helpers.test_function(
input_dtypes=dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=val[0],
include_infs=include_infs,
)
# vmap
@handle_test(
fn_tree="functional.ivy.vmap",
func=st.sampled_from([_fn1, _fn2, _fn3]),
dtype_and_arrays_and_axes=helpers.arrays_and_axes(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
num=2,
return_dtype=True,
),
in_axes_as_cont=st.booleans(),
)
def test_vmap(func, dtype_and_arrays_and_axes, in_axes_as_cont, backend_fw):
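    # run the vmapped function on the backend under test and on jax as the
    # ground truth; raising on both sides is treated as agreement, since some
    # drawn dtype/axis combinations are unsupported everywhere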
with BackendHandler.update_backend(backend_fw) as ivy_backend:
dtype, generated_arrays, in_axes = dtype_and_arrays_and_axes
arrays = [ivy_backend.native_array(array) for array in generated_arrays]
assume(
ivy_backend.as_ivy_dtype(dtype[0])
not in ivy_backend.function_unsupported_dtypes(ivy_backend.vmap)
)
if in_axes_as_cont:
vmapped_func = ivy_backend.vmap(func, in_axes=in_axes, out_axes=0)
else:
vmapped_func = ivy_backend.vmap(func, in_axes=0, out_axes=0)
assert callable(vmapped_func)
try:
fw_res = helpers.flatten_and_to_np(
ret=vmapped_func(*arrays), backend=backend_fw
)
fw_res = fw_res if len(fw_res) else None
except Exception:
fw_res = None
with BackendHandler.update_backend("jax") as gt_backend:
arrays = [gt_backend.native_array(array) for array in generated_arrays]
if in_axes_as_cont:
jax_vmapped_func = gt_backend.vmap(func, in_axes=in_axes, out_axes=0)
else:
jax_vmapped_func = gt_backend.vmap(func, in_axes=0, out_axes=0)
assert callable(jax_vmapped_func)
try:
jax_res = helpers.flatten_and_to_np(
ret=jax_vmapped_func(*arrays), backend="jax"
)
jax_res = jax_res if len(jax_res) else None
except Exception:
jax_res = None
if fw_res is not None and jax_res is not None:
helpers.value_test(
backend=backend_fw,
ground_truth_backend="jax",
ret_np_flat=fw_res,
ret_np_from_gt_flat=jax_res,
rtol=1e-1,
atol=1e-1,
)
elif fw_res is None and jax_res is None:
pass
else:
assert False, "One of the results is None while other isn't"
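# The attributes below are read by the test harness to skip device/dtype
# combinations that the helper compositions are known not to support on
# each backend.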
_composition_1.test_unsupported_devices_and_dtypes = {
"cpu": {
"numpy": ("bfloat16",),
"jax": ("complex64", "complex128"),
"tensorflow": ("complex64", "complex128"),
"torch": (
"uint16",
"uint32",
"uint64",
"float16",
"complex64",
"complex128",
),
"paddle": ("uint16", "uint32", "uint64", "bfloat16", "complex64", "complex128"),
},
"gpu": {
"numpy": ivy.all_dtypes,
"jax": ("complex64", "complex128"),
"tensorflow": ("complex64", "complex128"),
"torch": ("complex64", "float16", "uint16", "complex128", "uint64", "uint32"),
"paddle": ivy.all_dtypes,
},
"tpu": {
"numpy": ivy.all_dtypes,
"jax": ivy.all_dtypes,
"tensorflow": ivy.all_dtypes,
"torch": ivy.all_dtypes,
"paddle": ivy.all_dtypes,
},
}
_composition_2.test_unsupported_devices_and_dtypes = {
"cpu": {
"numpy": ("bfloat16", "complex64", "complex128"),
"jax": ("complex64", "complex128"),
"tensorflow": ("complex64", "complex128"),
"torch": ("uint16", "uint32", "uint64", "float16", "complex64", "complex128"),
"paddle": (
"uint16",
"uint32",
"uint64",
"bfloat16",
),
},
"gpu": {
"numpy": ivy.all_dtypes,
"jax": ("complex64", "complex128"),
"tensorflow": ("complex64", "complex128"),
"torch": ("uint16", "uint64", "uint32", "complex128", "float16", "complex64"),
"paddle": ivy.all_dtypes,
},
"tpu": {
"numpy": ivy.all_dtypes,
"jax": ivy.all_dtypes,
"tensorflow": ivy.all_dtypes,
"torch": ivy.all_dtypes,
"paddle": ivy.all_dtypes,
},
}
| ivy/ivy_tests/test_ivy/test_functional/test_core/test_general.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_core/test_general.py",
"repo_id": "ivy",
"token_count": 29777
} | 67 |
# global
from hypothesis import assume, strategies as st
# local
import ivy
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_test
# --- Helpers --- #
# --------------- #
# float_power_helper
@st.composite
def _float_power_helper(draw, *, available_dtypes=None):
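    # draw a base array with generous safety factors, then an exponent array
    # whose minimum value is clamped to 0 for integer dtypes so that negative
    # integer exponents are never generated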
if available_dtypes is None:
available_dtypes = helpers.get_dtypes("numeric")
dtype1, x1 = draw(
helpers.dtype_and_values(
available_dtypes=available_dtypes,
small_abs_safety_factor=16,
large_abs_safety_factor=16,
safety_factor_scale="log",
)
)
dtype2 = draw(helpers.get_dtypes("numeric"))
if ivy.is_int_dtype(dtype2[0]):
min_value = 0
else:
min_value = -10
dtype2, x2 = draw(
helpers.dtype_and_values(
min_value=min_value,
max_value=10,
dtype=dtype2,
)
)
return (dtype1[0], dtype2[0]), (x1[0], x2[0])
# nansum
@st.composite
def _get_castable_dtypes_values(draw, *, allow_nan=False):
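    # draw an array and a valid reduction axis, then pick a dtype the values
    # can safely be cast to; returns (input dtype, values, axis, cast dtype)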
available_dtypes = helpers.get_dtypes("numeric")
shape = draw(helpers.get_shape(min_num_dims=1, max_num_dims=4, max_dim_size=6))
dtype, values = draw(
helpers.dtype_and_values(
available_dtypes=available_dtypes,
num_arrays=1,
large_abs_safety_factor=24,
small_abs_safety_factor=24,
safety_factor_scale="log",
shape=shape,
allow_nan=allow_nan,
)
)
axis = draw(helpers.get_axis(shape=shape, force_int=True))
dtype1, values, dtype2 = draw(
helpers.get_castable_dtype(
draw(helpers.get_dtypes("float")), dtype[0], values[0]
)
)
return [dtype1], [values], axis, dtype2
@st.composite
def _get_dtype_values_axis_for_count_nonzero(
draw,
in_available_dtypes,
out_available_dtypes,
min_num_dims=1,
max_num_dims=10,
min_dim_size=1,
max_dim_size=10,
):
input_dtype, values, axis = draw(
helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes(in_available_dtypes),
min_num_dims=min_num_dims,
max_num_dims=max_num_dims,
min_dim_size=min_dim_size,
max_dim_size=max_dim_size,
valid_axis=True,
)
)
axis = draw(st.one_of(st.just(axis), st.none()))
output_dtype = draw(st.one_of(helpers.get_dtypes(out_available_dtypes)))
return [input_dtype, output_dtype], values, axis
@st.composite
def _lerp_data_helper(draw):
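    # torch restricts the dtype combinations lerp accepts, so for the torch
    # backend (without mixed-function compositions) start/end and weight are
    # drawn from separate dtype pools; other backends share a single dtype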
mixed_fn_compos = draw(st.booleans())
is_torch_backend = ivy.current_backend_str() == "torch"
kwargs = {
"shared_dtype": True,
"large_abs_safety_factor": 2.5,
"small_abs_safety_factor": 2.5,
"safety_factor_scale": "log",
"allow_nan": False,
"allow_inf": False,
}
if is_torch_backend and not mixed_fn_compos:
dtype1, start_end = draw(
helpers.dtype_and_values(
available_dtypes=(
helpers.get_dtypes("numeric", mixed_fn_compos=mixed_fn_compos)
),
num_arrays=2,
**kwargs,
)
)
dtype2, weight = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes(
"integer", mixed_fn_compos=mixed_fn_compos
),
num_arrays=1,
**kwargs,
)
)
input_dtypes = dtype1 + dtype2
inputs = start_end + weight
else:
input_dtypes, inputs = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes(
"valid", mixed_fn_compos=mixed_fn_compos
),
num_arrays=3,
**kwargs,
)
)
return input_dtypes, inputs[0], inputs[1], inputs[2]
@st.composite
def _sparsify_tensor_stg(draw):
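    # draw a tensor together with a cardinality `card` in [1, tensor.size],
    # i.e. how many entries sparsify_tensor should retain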
dtype, tensor, shape = draw(
helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
ret_shape=True,
min_num_dims=1,
min_dim_size=1,
min_value=10,
)
)
size = 1
for dim in shape:
size *= dim
card = draw(st.integers(min_value=1, max_value=size))
return dtype, tensor[0], card
# ldexp
@st.composite
def ldexp_args(draw):
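    # ldexp(x1, x2) == x1 * 2**x2, so draw a float mantissa array and an
    # integer exponent array over modest ranges to avoid overflow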
dtype1, x1 = draw(
helpers.dtype_and_values(
available_dtypes=["float32", "float64"],
num_arrays=1,
shared_dtype=True,
min_value=-100,
max_value=100,
min_num_dims=1,
max_num_dims=3,
)
)
dtype2, x2 = draw(
helpers.dtype_and_values(
available_dtypes=["int32", "int64"],
num_arrays=1,
shared_dtype=True,
min_value=-100,
max_value=100,
min_num_dims=1,
max_num_dims=3,
)
)
return (dtype1[0], dtype2[0]), (x1[0], x2[0])
# --- Main --- #
# ------------ #
# allclose
@handle_test(
fn_tree="functional.ivy.experimental.allclose",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=4,
small_abs_safety_factor=4,
safety_factor_scale="log",
num_arrays=2,
shared_dtype=True,
allow_nan=True,
),
rtol=st.floats(
min_value=1e-05, max_value=1e-01, exclude_min=True, exclude_max=True
),
atol=st.floats(
min_value=1e-08, max_value=1e-01, exclude_min=True, exclude_max=True
),
equal_nan=st.booleans(),
test_gradients=st.just(False),
test_with_out=st.just(False),
)
def test_allclose(
dtype_and_x, rtol, atol, equal_nan, test_flags, backend_fw, fn_name, on_device
):
input_dtype, x = dtype_and_x
assume("bfloat16" not in input_dtype)
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x1=x[0],
x2=x[1],
rtol=rtol,
atol=atol,
equal_nan=equal_nan,
)
@handle_test(
fn_tree="functional.ivy.experimental.amax",
dtype_and_x=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=2,
small_abs_safety_factor=2,
safety_factor_scale="log",
min_num_dims=1,
max_num_dims=5,
min_dim_size=2,
valid_axis=True,
allow_neg_axes=True,
min_axes_size=1,
min_value=None,
max_value=None,
allow_nan=False,
),
keep_dims=st.booleans(),
)
def test_amax(*, dtype_and_x, keep_dims, test_flags, backend_fw, fn_name, on_device):
input_dtype, x, axis = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
axis=axis,
keepdims=keep_dims,
)
@handle_test(
fn_tree="functional.ivy.experimental.amin",
dtype_and_x=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=2,
small_abs_safety_factor=2,
safety_factor_scale="log",
min_num_dims=1,
max_num_dims=5,
min_dim_size=2,
valid_axis=True,
allow_neg_axes=True,
min_axes_size=1,
min_value=None,
max_value=None,
allow_nan=False,
),
keep_dims=st.booleans(),
)
def test_amin(*, dtype_and_x, keep_dims, test_flags, backend_fw, fn_name, on_device):
input_dtype, x, axis = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
axis=axis,
keepdims=keep_dims,
)
@handle_test(
fn_tree="functional.ivy.experimental.binarizer",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric")
),
threshold=helpers.floats(),
container_flags=st.just([False]),
)
def test_binarizer(
*, dtype_and_x, threshold, test_flags, backend_fw, fn_name, on_device
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
threshold=threshold,
)
# conj
@handle_test(
fn_tree="conj",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("real_and_complex")
),
test_with_out=st.just(False),
)
def test_conj(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# copysign
@handle_test(
fn_tree="functional.ivy.experimental.copysign",
dtype_x1_x2=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
min_num_dims=0,
allow_nan=False,
shared_dtype=False,
),
test_gradients=st.just(False),
)
def test_copysign(dtype_x1_x2, test_flags, backend_fw, fn_name, on_device):
(x1_dtype, x2_dtype), (x1, x2) = dtype_x1_x2
helpers.test_function(
input_dtypes=[x1_dtype, x2_dtype],
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x1=x1,
x2=x2,
)
# count_nonzero
@handle_test(
fn_tree="functional.ivy.experimental.count_nonzero",
dtype_values_axis=_get_dtype_values_axis_for_count_nonzero(
in_available_dtypes="integer",
out_available_dtypes="integer",
min_num_dims=1,
max_num_dims=10,
min_dim_size=1,
max_dim_size=10,
),
keepdims=st.booleans(),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_count_nonzero(
*, dtype_values_axis, keepdims, test_flags, on_device, fn_name, backend_fw
):
i_o_dtype, a, axis = dtype_values_axis
helpers.test_function(
input_dtypes=i_o_dtype[0],
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
a=a[0],
axis=axis,
keepdims=keepdims,
dtype=i_o_dtype[1][0],
)
# diff
@handle_test(
fn_tree="functional.ivy.experimental.diff",
dtype_n_x_n_axis=helpers.dtype_values_axis(
available_dtypes=st.shared(helpers.get_dtypes("valid"), key="dtype"),
min_num_dims=1,
valid_axis=True,
force_int_axis=True,
),
dtype_prepend=helpers.dtype_and_values(
available_dtypes=st.shared(helpers.get_dtypes("valid"), key="dtype"),
min_num_dims=1,
max_num_dims=1,
),
dtype_append=helpers.dtype_and_values(
available_dtypes=st.shared(helpers.get_dtypes("valid"), key="dtype"),
min_num_dims=1,
max_num_dims=1,
),
n=st.integers(min_value=0, max_value=5),
test_gradients=st.just(False),
)
def test_diff(
*,
dtype_n_x_n_axis,
n,
dtype_prepend,
dtype_append,
test_flags,
backend_fw,
fn_name,
on_device,
):
input_dtype, x, axis = dtype_n_x_n_axis
_, prepend = dtype_prepend
_, append = dtype_append
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
n=n,
axis=axis,
prepend=prepend[0],
append=append[0],
)
# digamma
@handle_test(
fn_tree="functional.ivy.experimental.digamma",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=-10,
max_value=10,
min_num_dims=1,
max_num_dims=3,
min_dim_size=1,
max_dim_size=3,
).filter(lambda x: "bfloat16" not in x[0] and "float16" not in x[0]),
ground_truth_backend="tensorflow",
)
def test_digamma(
dtype_and_x,
backend_fw,
test_flags,
fn_name,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# erfc
@handle_test(
fn_tree="functional.ivy.experimental.erfc",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
test_with_out=st.just(False),
test_gradients=st.just(
False
    ),  # paddle: (Fatal) There is no grad op for inputs:[0] or it's stop_gradient=True.  # noqa
test_instance_method=st.just(True),
)
def test_erfc(
*,
dtype_and_x,
backend_fw,
test_flags,
fn_name,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# erfinv
@handle_test(
fn_tree="functional.ivy.experimental.erfinv",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_value=-1,
max_value=1,
abs_smallest_val=1e-05,
),
)
def test_erfinv(
*,
dtype_and_x,
backend_fw,
test_flags,
fn_name,
on_device,
):
input_dtype, x = dtype_and_x
if on_device == "cpu":
assume("float16" not in input_dtype and "bfloat16" not in input_dtype)
test_values = True
if backend_fw == "numpy":
# the numpy backend requires an approximation which doesn't pass the value tests
test_values = False
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
test_values=test_values,
x=x[0],
)
# fix
@handle_test(
fn_tree="functional.ivy.experimental.fix",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
min_num_dims=1,
max_num_dims=3,
min_dim_size=1,
max_dim_size=3,
),
test_gradients=st.just(False),
)
def test_fix(dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# float_power
@handle_test(
fn_tree="functional.ivy.experimental.float_power",
dtype_and_x=_float_power_helper(),
test_gradients=st.just(False),
)
def test_float_power(dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtypes, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtypes,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x1=x[0],
x2=x[1],
rtol_=1e-1,
atol_=1e-1,
)
# fmax
@handle_test(
fn_tree="functional.ivy.experimental.fmax",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=4,
small_abs_safety_factor=4,
safety_factor_scale="log",
num_arrays=2,
),
test_gradients=st.just(False),
)
def test_fmax(dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x1=x[0],
x2=x[1],
)
# frexp
@handle_test(
fn_tree="functional.ivy.experimental.frexp",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=["float32", "float64"],
num_arrays=1,
shared_dtype=True,
min_value=-100,
max_value=100,
min_num_dims=1,
max_num_dims=3,
),
test_gradients=st.just(False),
)
def test_frexp(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# gradient
@handle_test(
fn_tree="functional.ivy.experimental.gradient",
dtype_n_x_n_axis=helpers.dtype_values_axis(
available_dtypes=helpers.get_dtypes("valid"),
min_num_dims=1,
max_num_dims=3,
min_dim_size=2,
max_dim_size=4,
valid_axis=True,
force_int_axis=True,
),
spacing=helpers.ints(
min_value=-3,
max_value=3,
),
edge_order=st.sampled_from([1, 2]),
test_with_out=st.just(False),
test_gradients=st.just(False),
)
def test_gradient(
*,
dtype_n_x_n_axis,
spacing,
test_flags,
backend_fw,
fn_name,
on_device,
edge_order,
):
input_dtype, x, axis = dtype_n_x_n_axis
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
on_device=on_device,
fn_name=fn_name,
x=x[0],
spacing=spacing,
axis=axis,
edge_order=edge_order,
)
# hypot
@handle_test(
fn_tree="functional.ivy.experimental.hypot",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
min_value=-100,
max_value=100,
min_num_dims=1,
max_num_dims=3,
),
test_gradients=st.just(False),
)
def test_hypot(dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
atol_=1e-2,
x1=x[0],
x2=x[1],
)
# isclose
@handle_test(
fn_tree="functional.ivy.experimental.isclose",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=4,
small_abs_safety_factor=4,
safety_factor_scale="log",
num_arrays=2,
shared_dtype=True,
allow_nan=True,
),
rtol=st.floats(
min_value=1e-05, max_value=1e-01, exclude_min=True, exclude_max=True
),
atol=st.floats(
min_value=1e-08, max_value=1e-01, exclude_min=True, exclude_max=True
),
equal_nan=st.booleans(),
test_gradients=st.just(False),
)
def test_isclose(
*, dtype_and_x, rtol, atol, equal_nan, test_flags, backend_fw, fn_name, on_device
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
a=x[0],
b=x[1],
rtol=rtol,
atol=atol,
equal_nan=equal_nan,
)
@handle_test(
fn_tree="functional.ivy.experimental.ldexp",
dtype_and_x=ldexp_args(),
test_gradients=st.just(False),
)
def test_ldexp(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x1=x[0],
x2=x[1],
)
# lerp
@handle_test(
fn_tree="functional.ivy.experimental.lerp",
data=_lerp_data_helper(),
test_gradients=st.just(False),
)
def test_lerp(
*,
data,
test_flags,
backend_fw,
fn_name,
on_device,
):
input_dtypes, start, end, weight = data
helpers.test_function(
input_dtypes=input_dtypes,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
atol_=1e-01,
rtol_=1e-01,
on_device=on_device,
input=start,
end=end,
weight=weight,
)
# lgamma
@handle_test(
fn_tree="functional.ivy.experimental.lgamma",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
safety_factor_scale="log",
),
test_gradients=st.just(False),
)
def test_lgamma(
*,
dtype_and_x,
test_flags,
backend_fw,
fn_name,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
on_device=on_device,
fn_name=fn_name,
x=x[0],
)
# modf
@handle_test(
fn_tree="functional.ivy.experimental.modf",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
num_arrays=1,
min_value=0,
exclude_min=True,
),
test_with_out=st.just(False),
)
def test_modf(
*,
dtype_and_x,
backend_fw,
test_flags,
fn_name,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# nansum
@handle_test(
fn_tree="functional.ivy.experimental.nansum",
dtype_x_axis_dtype=_get_castable_dtypes_values(allow_nan=True),
keep_dims=st.booleans(),
test_gradients=st.just(False),
)
def test_nansum(
*, dtype_x_axis_dtype, keep_dims, test_flags, on_device, fn_name, backend_fw
):
input_dtype, x, axis, dtype = dtype_x_axis_dtype
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
on_device=on_device,
fn_name=fn_name,
x=x[0],
axis=axis,
keepdims=keep_dims,
dtype=dtype,
)
# nextafter
@handle_test(
fn_tree="functional.ivy.experimental.nextafter",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=["float32", "float64"],
num_arrays=2,
shared_dtype=True,
min_value=-10,
max_value=10,
min_num_dims=1,
max_num_dims=3,
),
test_gradients=st.just(False),
)
def test_nextafter(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x1=x[0],
x2=x[1],
)
# sinc
@handle_test(
fn_tree="functional.ivy.experimental.sinc",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=4,
small_abs_safety_factor=4,
),
test_gradients=st.just(False),
)
def test_sinc(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
atol_=1e-02,
on_device=on_device,
backend_to_test=backend_fw,
fn_name=fn_name,
x=x[0],
)
# sparsify_tensor
@handle_test(
fn_tree="functional.ivy.experimental.sparsify_tensor",
tensor_data=_sparsify_tensor_stg(),
)
def test_sparsify_tensor(
tensor_data,
test_flags,
on_device,
fn_name,
backend_fw,
):
dtype, tensor, card = tensor_data
helpers.test_function(
backend_to_test=backend_fw,
test_flags=test_flags,
on_device=on_device,
fn_name=fn_name,
input_dtypes=dtype,
tensor=tensor,
card=card,
)
# xlogy
@handle_test(
fn_tree="functional.ivy.experimental.xlogy",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_complex"),
num_arrays=2,
min_value=-10,
max_value=10,
min_num_dims=1,
max_num_dims=3,
),
test_gradients=st.just(False),
)
def test_xlogy(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
x=x[0],
y=x[1],
)
# zeta
@handle_test(
fn_tree="functional.ivy.experimental.zeta",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
shared_dtype=True,
min_value=-10,
max_value=10,
min_num_dims=1,
max_num_dims=3,
),
test_gradients=st.just(False),
)
def test_zeta(
dtype_and_x,
test_flags,
fn_name,
on_device,
backend_fw,
):
input_dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
on_device=on_device,
fn_name=fn_name,
rtol_=1e-02,
atol_=1e-02,
x=x[0],
q=x[1],
)
| ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_elementwise.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_core/test_elementwise.py",
"repo_id": "ivy",
"token_count": 13473
} | 68 |
# global
from hypothesis import strategies as st
# local
import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_test
# celu
@handle_test(
fn_tree="functional.ivy.experimental.celu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_complex"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
alpha=st.floats(min_value=0.1, max_value=1.0),
complex_mode=st.sampled_from(["jax", "split", "magnitude"]),
)
def test_celu(
*, dtype_and_x, alpha, complex_mode, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-2,
atol_=1e-2,
x=x[0],
alpha=alpha,
complex_mode=complex_mode,
)
# elu
@handle_test(
fn_tree="functional.ivy.experimental.elu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_complex"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
alpha=st.one_of(
st.floats(min_value=0.10, max_value=1.0),
),
)
def test_elu(
*,
dtype_and_x,
alpha,
test_flags,
backend_fw,
fn_name,
on_device,
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
alpha=alpha,
)
# hardshrink
@handle_test(
fn_tree="functional.ivy.experimental.hardshrink",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
threshold=st.one_of(
st.floats(min_value=0.0, max_value=1e30),
),
)
def test_hardshrink(
*, dtype_and_x, threshold, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
lambd=threshold,
)
# hardsilu
@handle_test(
fn_tree="functional.ivy.experimental.hardsilu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
)
def test_hardsilu(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# hardtanh
@handle_test(
fn_tree="functional.ivy.experimental.hardtanh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
min_val=st.one_of(
st.floats(min_value=-10.0, max_value=-1.0),
),
max_val=st.one_of(
st.floats(min_value=1.0, max_value=10.0),
),
)
def test_hardtanh(
*,
dtype_and_x,
min_val,
max_val,
test_flags,
backend_fw,
fn_name,
on_device,
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
min_val=min_val,
max_val=max_val,
)
# logit
@handle_test(
fn_tree="functional.ivy.experimental.logit",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_complex"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
)
def test_logit(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
)
# logsigmoid
@handle_test(
fn_tree="functional.ivy.experimental.logsigmoid",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
safety_factor_scale="log",
large_abs_safety_factor=120,
),
test_with_out=st.just(False),
)
def test_logsigmoid(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
input_dtype, x = dtype_and_x
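    # pass the drawn array positionally (num_positional_args is otherwise
    # drawn by the test harness)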
test_flags.num_positional_args = len(x)
helpers.test_function(
input_dtypes=input_dtype,
test_flags=test_flags,
backend_to_test=backend_fw,
fn_name=fn_name,
on_device=on_device,
input=x[0],
)
# prelu
@handle_test(
fn_tree="prelu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
shape=st.shared(helpers.get_shape(), key="prelu"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
slope=helpers.array_values(
dtype=helpers.get_dtypes("float"),
shape=st.shared(helpers.get_shape(), key="prelu"),
),
)
def test_prelu(*, dtype_and_x, slope, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
slope=slope,
)
# relu6
@handle_test(
fn_tree="functional.ivy.experimental.relu6",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float_and_complex"),
large_abs_safety_factor=2,
small_abs_safety_factor=2,
safety_factor_scale="log",
min_value=1e-15,
),
complex_mode=st.sampled_from(["jax", "split", "magnitude"]),
)
def test_relu6(
*, dtype_and_x, complex_mode, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
complex_mode=complex_mode,
)
# scaled_tanh
@handle_test(
fn_tree="functional.ivy.experimental.scaled_tanh",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
min_dim_size=1,
min_num_dims=1,
),
alpha=st.floats(min_value=0.1, max_value=5.0),
beta=st.floats(min_value=0.1, max_value=5.0),
ground_truth_backend="paddle",
)
def test_scaled_tanh(
*, dtype_and_x, alpha, beta, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-5,
atol_=1e-5,
x=x[0],
alpha=alpha,
beta=beta,
)
# selu
@handle_test(
fn_tree="functional.ivy.experimental.selu",
dtype_and_input=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
safety_factor_scale="log",
small_abs_safety_factor=20,
),
test_with_out=st.just(False),
)
def test_selu(*, dtype_and_input, test_flags, backend_fw, fn_name, on_device):
input_dtype, input = dtype_and_input
test_flags.num_positional_args = len(input)
helpers.test_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
atol_=1e-2,
x=input[0],
)
# silu
@handle_test(
fn_tree="functional.ivy.experimental.silu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
)
def test_silu(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-02,
atol_=1e-02,
x=x[0],
)
# softshrink
@handle_test(
fn_tree="functional.ivy.experimental.softshrink",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
threshold=st.one_of(
st.floats(min_value=0.0, max_value=1e30),
),
)
def test_softshrink(
*, dtype_and_x, threshold, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
lambd=threshold,
)
# tanhshrink
@handle_test(
fn_tree="functional.ivy.experimental.tanhshrink",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
)
def test_tanhshrink(*, dtype_and_x, test_flags, backend_fw, fn_name, on_device):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
rtol_=1e-02,
atol_=1e-02,
x=x[0],
)
# threshold
@handle_test(
fn_tree="functional.ivy.experimental.threshold",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("valid"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
threshold=st.one_of(
st.floats(min_value=-1e30, max_value=1e30),
),
value=st.one_of(
st.floats(min_value=-1e30, max_value=1e30),
),
)
def test_threshold(
*, dtype_and_x, threshold, value, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
threshold=threshold,
value=value,
)
# thresholded_relu
@handle_test(
fn_tree="functional.ivy.experimental.thresholded_relu",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
large_abs_safety_factor=8,
small_abs_safety_factor=8,
safety_factor_scale="log",
),
threshold=st.one_of(
st.floats(min_value=-0.10, max_value=10.0),
),
)
def test_thresholded_relu(
*, dtype_and_x, threshold, test_flags, backend_fw, fn_name, on_device
):
dtype, x = dtype_and_x
helpers.test_function(
input_dtypes=dtype,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_name=fn_name,
on_device=on_device,
x=x[0],
threshold=threshold,
)
| ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_nn/test_activations.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_functional/test_experimental/test_nn/test_activations.py",
"repo_id": "ivy",
"token_count": 5906
} | 69 |
import sys
import os
import contextlib
import pytest
import ivy
@pytest.mark.parametrize("trace_mode", ["full", "ivy", "frontend"])
def test_get_trace_mode(trace_mode, backend_fw):
ivy.set_backend(backend_fw)
ivy.set_exception_trace_mode(trace_mode)
ivy.set_exception_trace_mode("ivy")
ivy.utils.assertions.check_equal(ivy.exception_trace_mode, "ivy", as_array=False)
ivy.previous_backend()
@pytest.mark.parametrize("trace_mode", ["full", "ivy", "frontend"])
def test_set_trace_mode(trace_mode, backend_fw):
ivy.set_backend(backend_fw)
ivy.set_exception_trace_mode(trace_mode)
ivy.utils.assertions.check_equal(
ivy.exception_trace_mode, trace_mode, as_array=False
)
ivy.previous_backend()
@pytest.mark.parametrize("trace_mode", ["full", "ivy", "frontend"])
@pytest.mark.parametrize("show_func_wrapper", [True, False])
def test_trace_modes(backend_fw, trace_mode, show_func_wrapper):
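    # trigger a divide error, capture the printed traceback by temporarily
    # redirecting stdout to a file, then assert that the chosen trace mode
    # filtered the expected frames in or out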
ivy.set_backend(backend_fw)
filename = "excep_out.txt"
orig_stdout = sys.stdout
with open(filename, "w") as f:
sys.stdout = f
ivy.set_exception_trace_mode(trace_mode)
ivy.set_show_func_wrapper_trace_mode(show_func_wrapper)
x = ivy.array([])
y = ivy.array([1.0, 3.0, 4.0])
lines = ""
try:
ivy.divide(x, y)
except Exception as e:
print(e)
sys.stdout = orig_stdout
with open(filename) as f:
lines += f.read()
if trace_mode == "full" and not show_func_wrapper:
assert "/func_wrapper.py" not in lines
assert "/ivy/functional/backends" in lines
        if backend_fw not in ["torch", "numpy"]:
assert "/dist-packages" in lines
if trace_mode == "full" and show_func_wrapper:
assert "/func_wrapper.py" in lines
assert "/ivy/functional/backends" in lines
        if backend_fw not in ["torch", "numpy"]:
assert "/dist-packages" in lines
if trace_mode in ["ivy", "frontend"]:
if not show_func_wrapper:
assert "/func_wrapper.py" not in lines
assert "/dist-packages" not in lines
if show_func_wrapper:
if trace_mode == "frontend":
assert "/ivy/functional/backends" not in lines
else:
assert "/func_wrapper.py" in lines
assert "/dist-packages" not in lines
with contextlib.suppress(FileNotFoundError):
os.remove(filename)
ivy.previous_backend()
@pytest.mark.parametrize("trace_mode", ["full", "ivy", "frontend"])
def test_unset_trace_mode(trace_mode, backend_fw):
ivy.set_backend(backend_fw)
ivy.set_exception_trace_mode(trace_mode)
ivy.set_exception_trace_mode("ivy")
ivy.utils.assertions.check_equal(ivy.exception_trace_mode, "ivy", as_array=False)
ivy.unset_exception_trace_mode()
ivy.utils.assertions.check_equal(
ivy.exception_trace_mode, trace_mode, as_array=False
)
ivy.previous_backend()
| ivy/ivy_tests/test_ivy/test_misc/test_exceptions.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_misc/test_exceptions.py",
"repo_id": "ivy",
"token_count": 1359
} | 70 |
"""Collection of tests for module converters."""
# global
import pytest
from types import SimpleNamespace
from typing import Sequence
# local
import ivy
try:
import torch
import torch.nn as nn
except ImportError:
torch = SimpleNamespace()
torch.tanh = SimpleNamespace
nn = SimpleNamespace()
nn.Module = SimpleNamespace
nn.Linear = SimpleNamespace
torch.optim = SimpleNamespace()
torch.optim.SGD = SimpleNamespace
nn.L1Loss = SimpleNamespace
try:
import jax
from jax import value_and_grad
import haiku as hk
import jax.numpy as jnp
except ImportError:
jax = SimpleNamespace()
value_and_grad = SimpleNamespace
hk = SimpleNamespace()
hk.Module = SimpleNamespace
hk.Linear = SimpleNamespace
hk.transform = SimpleNamespace()
hk.transform.init = SimpleNamespace
jnp = SimpleNamespace()
jnp.expand_dims = SimpleNamespace
jnp.tanh = SimpleNamespace
jnp.mean = SimpleNamespace
jax.random = SimpleNamespace()
jax.random.PRNGKey = SimpleNamespace
jax.tree_map = SimpleNamespace
try:
import flax
import jaxlib
except ImportError:
flax = SimpleNamespace()
flax.linen = SimpleNamespace()
flax.linen.Module = SimpleNamespace
flax.linen.Dense = SimpleNamespace
jaxlib = SimpleNamespace()
jaxlib.xla_extension = SimpleNamespace()
jaxlib.xla_extension.Device = SimpleNamespace
try:
import tensorflow as tf
except ImportError:
tf = SimpleNamespace()
tf.expand_dims = SimpleNamespace
tf.tanh = SimpleNamespace
tf.keras = SimpleNamespace()
tf.keras.Model = SimpleNamespace
tf.keras.layers = SimpleNamespace()
tf.keras.layers.Dense = SimpleNamespace
tf.keras.optimizers = SimpleNamespace()
tf.keras.optimizers.SGD = SimpleNamespace()
tf.keras.optimizers.SGD.apply_gradients = SimpleNamespace
tf.keras.losses = SimpleNamespace()
tf.keras.losses.MeanAbsoluteError = SimpleNamespace
tf.GradientTape = SimpleNamespace()
tf.GradientTape.tape = SimpleNamespace
tf.GradientTape.watch = SimpleNamespace
try:
import paddle
except ImportError:
paddle = SimpleNamespace()
paddle.nn.Layer = SimpleNamespace
paddle.nn.Linear = SimpleNamespace
paddle.nn.functional.tanh = SimpleNamespace
paddle.optimizer = SimpleNamespace()
paddle.optimizer.SGD = SimpleNamespace
paddle.nn.L1Loss = SimpleNamespace
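# name of the ivy.Module classmethod that converts a native module from each
# framework (jax has separate converters for haiku and flax modules)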
FROM_CONVERTERS = {
"torch": "from_torch_module",
"jax": {
"haiku": "from_haiku_module",
"flax": "from_flax_module",
},
"tensorflow": "from_keras_module",
"paddle": "from_paddle_module",
}
class TensorflowLinear(tf.keras.Model):
def __init__(self, out_size):
super().__init__()
self._linear = tf.keras.layers.Dense(out_size)
def build(self, input_shape):
super().build(input_shape)
def call(self, x):
return self._linear(x)
class TensorflowModule(tf.keras.Model):
def __init__(self, in_size, out_size, device=None, hidden_size=64):
super().__init__()
self._linear0 = TensorflowLinear(hidden_size)
self._linear1 = TensorflowLinear(hidden_size)
self._linear2 = TensorflowLinear(out_size)
def call(self, x):
x = tf.expand_dims(x, 0)
x = tf.tanh(self._linear0(x))
x = tf.tanh(self._linear1(x))
return tf.tanh(self._linear2(x))[0]
class TorchLinearModule(nn.Module):
def __init__(self, in_size, out_size):
super().__init__()
self._linear = nn.Linear(in_size, out_size)
def forward(self, x):
return self._linear(x)
class TorchModule(nn.Module):
def __init__(self, in_size, out_size, device=None, hidden_size=64):
super().__init__()
self._linear0 = TorchLinearModule(in_size, hidden_size)
self._linear1 = TorchLinearModule(hidden_size, hidden_size)
self._linear2 = TorchLinearModule(hidden_size, out_size)
def forward(self, x):
x = x.unsqueeze(0)
x = torch.tanh(self._linear0(x))
x = torch.tanh(self._linear1(x))
return torch.tanh(self._linear2(x))[0]
class HaikuLinear(hk.Module):
def __init__(self, out_size):
super().__init__()
self._linear = hk.Linear(out_size)
def __call__(self, x):
return self._linear(x)
class HaikuModule(hk.Module):
def __init__(self, in_size, out_size, device=None, hidden_size=64):
super().__init__()
self._linear0 = HaikuLinear(hidden_size)
self._linear1 = HaikuLinear(hidden_size)
self._linear2 = HaikuLinear(out_size)
def __call__(self, x):
x = jnp.expand_dims(x, 0)
x = jnp.tanh(self._linear0(x))
x = jnp.tanh(self._linear1(x))
return jnp.tanh(self._linear2(x))[0]
class FlaxLinear(flax.linen.Module):
out_size: Sequence[int]
def setup(self):
self._linear = flax.linen.Dense(self.out_size)
def __call__(self, x):
return self._linear(x)
class FlaxModule(flax.linen.Module):
in_size: Sequence[int]
out_size: Sequence[int]
device: jaxlib.xla_extension.Device = None
hidden_size: Sequence[int] = 64
def setup(self):
self._linear0 = FlaxLinear(out_size=self.hidden_size)
self._linear1 = FlaxLinear(out_size=self.hidden_size)
self._linear2 = FlaxLinear(out_size=self.out_size)
def __call__(self, x):
x = jnp.expand_dims(x, 0)
x = jnp.tanh(self._linear0(x))
x = jnp.tanh(self._linear1(x))
return jnp.tanh(self._linear2(x))[0]
class PaddleLinearModule(paddle.nn.Layer):
def __init__(self, in_size, out_size):
super().__init__()
self._linear = paddle.nn.Linear(in_size, out_size)
def forward(self, x):
return self._linear(x)
class PaddleModule(paddle.nn.Layer):
def __init__(self, in_size, out_size, device=None, hidden_size=64):
super().__init__()
self._linear0 = PaddleLinearModule(in_size, hidden_size)
self._linear1 = PaddleLinearModule(hidden_size, hidden_size)
self._linear2 = PaddleLinearModule(hidden_size, out_size)
def forward(self, x):
x = x.unsqueeze(0)
x = paddle.nn.functional.tanh(self._linear0(x))
x = paddle.nn.functional.tanh(self._linear1(x))
return paddle.nn.functional.tanh(self._linear2(x))[0]
def get_converter(ivy_backend, converter):
return getattr(ivy_backend.Module, converter)
@pytest.mark.parametrize("bs_ic_oc", [([1, 2], 4, 5)])
@pytest.mark.parametrize("from_class_and_args", [True, False])
def test_from_backend_module(bs_ic_oc, from_class_and_args, backend_fw):
# smoke test
if backend_fw in ["numpy", "jax"]:
        # no converters are implemented for numpy; the jax converters are
        # tested separately in test_from_jax_module
pytest.skip()
batch_shape, input_channels, output_channels = bs_ic_oc
# using ivy_backend.utils.backend.ContextManager instead of update_backend,
# because with_backend doesn't work here
with ivy.utils.backend.ContextManager(backend_fw) as ivy_backend:
x = ivy_backend.astype(
ivy_backend.linspace(
ivy_backend.zeros(batch_shape),
ivy_backend.ones(batch_shape),
input_channels,
),
"float32",
)
native_module_class = NATIVE_MODULES[ivy_backend.current_backend_str()]
module_converter = get_converter(
ivy_backend, FROM_CONVERTERS[ivy_backend.current_backend_str()]
)
if from_class_and_args:
ivy_module = module_converter(
native_module_class,
instance_args=[x],
constructor_kwargs={
"in_size": input_channels,
"out_size": output_channels,
},
)
else:
if ivy_backend.current_backend_str() == "tensorflow":
native_module = native_module_class(
in_size=input_channels, out_size=output_channels
)
native_module.build((input_channels,))
else:
native_module = native_module_class(
in_size=input_channels, out_size=output_channels
)
fw_kwargs = {}
ivy_module = module_converter(native_module, **fw_kwargs)
def loss_fn(v_=None):
out = ivy_module(x, v=v_)
return ivy_backend.mean(out)
# train
loss_tm1 = 1e12
loss = None
grads = None
loss_fn() # for on-call mode
for i in range(10):
loss, grads = ivy_backend.execute_with_gradients(loss_fn, ivy_module.v)
w = ivy_backend.gradient_descent_update(ivy_module.v, grads, 1e-3)
ivy_backend.inplace_update(ivy_module.v, w)
assert loss <= loss_tm1
loss_tm1 = loss
# type test
assert ivy_backend.is_array(loss)
assert isinstance(grads, ivy_backend.Container)
# cardinality test
assert loss.shape == ()
# value test
assert (abs(grads).max() > 0).cont_all_true()
@pytest.mark.parametrize("bs_ic_oc", [([1, 2], 4, 5)])
@pytest.mark.parametrize("from_class_and_args", [True, False])
@pytest.mark.parametrize("module_type", ["haiku", "flax"])
def test_from_jax_module(bs_ic_oc, from_class_and_args, module_type, backend_fw):
# smoke test
if backend_fw not in ["jax"]:
        # this test only exercises the jax (haiku/flax) converters
pytest.skip()
batch_shape, input_channels, output_channels = bs_ic_oc
# using ivy_backend.utils.backend.ContextManager instead of update_backend,
# because with_backend doesn't work here
with ivy.utils.backend.ContextManager(backend_fw) as ivy_backend:
x = ivy_backend.astype(
ivy_backend.linspace(
ivy_backend.zeros(batch_shape),
ivy_backend.ones(batch_shape),
input_channels,
),
"float32",
)
native_module_class = NATIVE_MODULES[ivy_backend.current_backend_str()][
module_type
]
        module_converter = get_converter(
            ivy_backend, FROM_CONVERTERS[ivy_backend.current_backend_str()][module_type]
        )
if from_class_and_args:
ivy_module = module_converter(
native_module_class,
instance_args=[x],
constructor_kwargs={
"in_size": input_channels,
"out_size": output_channels,
},
)
else:
if module_type == "haiku":
def forward_fn(*a, **kw):
model = native_module_class(input_channels, output_channels)
return model(ivy_backend.to_native(x))
native_module = hk.transform(forward_fn)
else:
native_module = native_module_class(
in_size=input_channels, out_size=output_channels
)
fw_kwargs = {}
if module_type == "haiku":
fw_kwargs["params_hk"] = native_module.init(0, x)
else:
fw_kwargs["params_fx"] = native_module.init(
jax.random.PRNGKey(0), ivy_backend.to_native(x)
)
ivy_module = module_converter(native_module, **fw_kwargs)
def loss_fn(v_=None):
out = ivy_module(x, v=v_)
return ivy_backend.mean(out)
# train
loss_tm1 = 1e12
loss = None
grads = None
loss_fn() # for on-call mode
for i in range(10):
loss, grads = ivy_backend.execute_with_gradients(loss_fn, ivy_module.v)
ivy_module.v = ivy_backend.gradient_descent_update(
ivy_module.v, grads, 1e-3
)
assert loss < loss_tm1
loss_tm1 = loss
# type test
assert ivy_backend.is_array(loss)
assert isinstance(grads, ivy_backend.Container)
# cardinality test
assert loss.shape == ()
# value test
assert (abs(grads).max() > 0).cont_all_true()
NATIVE_MODULES = {
"torch": TorchModule,
"jax": {
"haiku": HaikuModule,
"flax": FlaxModule,
},
"tensorflow": TensorflowModule,
"paddle": PaddleModule,
}
| ivy/ivy_tests/test_ivy/test_stateful/test_converters.py/0 | {
"file_path": "ivy/ivy_tests/test_ivy/test_stateful/test_converters.py",
"repo_id": "ivy",
"token_count": 5945
} | 71 |
import requests
def get_latest_package_version(package_name):
try:
url = f"https://pypi.org/pypi/{package_name}/json"
response = requests.get(url, timeout=10)
response.raise_for_status()
package_info = response.json()
return package_info["info"]["version"]
except requests.exceptions.RequestException:
print(f"Error: Failed to fetch package information for {package_name}.")
return None
def get_submodule_and_function_name(test_path, is_frontend_test=False):
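    # derive the submodule name and the fully qualified function (or
    # instance-method) name under test from a pytest node id such as
    # ".../test_foo.py::test_bar", by scanning the decorator block that
    # precedes the test function for fn_tree / method_name markers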
submodule_test = test_path.split("/")[-1]
submodule, test_function = submodule_test.split("::")
submodule = submodule.replace("test_", "").replace(".py", "")
with open(test_path.split("::")[0]) as test_file:
test_file_content = test_file.read()
test_function_idx = test_file_content.find(f"def {test_function}")
test_function_block_idx = test_file_content[:test_function_idx].rfind("\n\n")
if test_function_block_idx == -1:
return submodule, None
relevant_file_content = test_file_content[
test_function_block_idx:test_function_idx
]
fn_tree_idx = relevant_file_content.rfind('fn_tree="')
# frontend test
if is_frontend_test:
function_name = relevant_file_content[fn_tree_idx + 9 :].split('"')[0]
# instance method test
if fn_tree_idx == -1:
class_tree_idx = test_file_content.find('CLASS_TREE = "')
method_name_idx = relevant_file_content.rfind('method_name="')
if class_tree_idx == -1 or method_name_idx == -1:
return submodule, None
class_tree = test_file_content[class_tree_idx + 14 :].split('"')[0]
class_name = ".".join(class_tree.split(".")[3:])
method_name = relevant_file_content[method_name_idx + 13 :].split('"')[
0
]
function_name = f"{class_name}.{method_name}"
# ivy test
else:
function_name = test_function[5:]
# instance method test
if fn_tree_idx == -1:
method_name_idx = relevant_file_content.rfind('method_tree="')
if method_name_idx != -1:
method_name = relevant_file_content[method_name_idx + 13 :].split(
'"'
)[0]
function_name = f"ivy.{method_name}"
else:
return submodule, None
return submodule, function_name
| ivy/scripts/run_tests/helpers.py/0 | {
"file_path": "ivy/scripts/run_tests/helpers.py",
"repo_id": "ivy",
"token_count": 1267
} | 72 |
import sys
from pymongo import MongoClient
from get_all_tests import get_all_tests
module_map = {
"core": "test_functional/test_core",
"exp_core": "test_functional/test_experimental/test_core",
"nn": "test_functional/test_experimental/test_nn",
"exp_nn": "test_experimental/test_nn",
"stateful": "test_stateful",
"torch": "test_frontends/test_torch",
"jax": "test_frontends/test_jax",
"tensorflow": "test_frontends/test_tensorflow",
"numpy": "test_frontends/test_numpy",
"misc": "test_misc",
"paddle": "test_frontends/test_paddle",
"scipy": "test_frontends/test_scipy",
"torchvision": "test_frontends/test_torchvision",
}
def keys_to_delete_from_db(all_tests, module, data, current_key=""):
"""Recursively navigate and identify keys not in the list."""
keys_for_deletion = []
for key, value in data.items():
new_key = f"{current_key}.{key}" if current_key else key
# If this is a dictionary, recurse deeper
if isinstance(value, dict):
keys_for_deletion.extend(
keys_to_delete_from_db(all_tests, module, value, new_key)
)
elif key != "_id":
components = new_key.split(".")
submodule = components[0]
function = components[-2]
test = f"{module}/{submodule}::{function}"
if test not in all_tests:
keys_for_deletion.append(".".join(components[:-1]))
return keys_for_deletion
submodules = (
"test_paddle",
"test_tensorflow",
"test_torch",
"test_jax",
"test_numpy",
"test_functional",
"test_experimental",
"test_stateful",
"test_misc",
"test_scipy",
"test_pandas",
"test_mindspore",
"test_onnx",
"test_sklearn",
"test_xgboost",
"test_torchvision",
)
db_dict = {
"test_functional/test_core": ["core", 10],
"test_experimental/test_core": ["exp_core", 11],
"test_functional/test_nn": ["nn", 12],
"test_experimental/test_nn": ["exp_nn", 13],
"test_stateful": ["stateful", 14],
"test_torch": ["torch", 15],
"test_jax": ["jax", 16],
"test_tensorflow": ["tensorflow", 17],
"test_numpy": ["numpy", 18],
"test_misc": ["misc", 19],
"test_paddle": ["paddle", 20],
"test_scipy": ["scipy", 21],
"test_pandas": ["pandas", 22],
"test_mindspore": ["mindspore", 23],
"test_onnx": ["onnx", 24],
"test_sklearn": ["sklearn", 25],
"test_xgboost": ["xgboost", 26],
"test_torchvision": ["torchvision", 27],
}
def get_submodule(test_path):
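    # resolve a test path such as ".../test_frontends/test_torch/test_x.py::test_fn"
    # into its db_dict entry, the submodule name (file stem without the
    # "test_" prefix) and the test function name; assumes the path contains
    # one of the known submodules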
test_path = test_path.split("/")
for name in submodules:
if name in test_path:
if name == "test_functional":
if test_path[3] == "test_experimental":
coll = db_dict[f"test_experimental/{test_path[4]}"]
else:
coll = db_dict[f"test_functional/{test_path[-2]}"]
else:
coll = db_dict[name]
break
submod_test = test_path[-1]
submod, test_fn = submod_test.split("::")
submod = submod.replace("test_", "").replace(".py", "")
return coll, submod, test_fn
def process_test(test):
coll, submod, test_fn = get_submodule(test)
return f"{coll[0]}/{submod}::{test_fn}"
def remove_empty_objects(document, key_prefix=""):
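    # walk the document and collect the dotted key paths of empty sub-objects
    # so the caller can $unset them in a single update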
# Base case: if the document is not a dictionary, return an empty list
if not isinstance(document, dict):
return []
# List to store keys associated with empty objects
empty_keys = []
for key, value in document.items():
# Generate the full key path
full_key = f"{key_prefix}.{key}" if key_prefix else key
# If the value is a dictionary, recursively check for empty objects
if isinstance(value, dict):
# If the dictionary is empty, store its key
if not value:
empty_keys.append(full_key)
else:
empty_keys.extend(remove_empty_objects(value, full_key))
return empty_keys
def main():
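    # pass 1: drop DB keys for tests that no longer exist in the repository;
    # pass 2: repeatedly strip documents of any sub-objects left empty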
all_tests = get_all_tests()
all_tests = {process_test(test.split(",")[0].strip()) for test in all_tests}
mongo_key = sys.argv[1]
cluster = MongoClient(
f"mongodb+srv://deep-ivy:{mongo_key}@cluster0.qdvf8q3.mongodb.net/?retryWrites=true&w=majority" # noqa
)
db = cluster["Ivy_tests_multi_gpu"]
for collection_name in db.list_collection_names():
collection = db[collection_name]
for document in collection.find({}):
undesired_keys = keys_to_delete_from_db(
all_tests, collection_name, document
)
for key in undesired_keys:
collection.update_one({"_id": document["_id"]}, {"$unset": {key: 1}})
for collection_name in db.list_collection_names():
collection = db[collection_name]
break_flag = False
while True:
for document in collection.find({}):
keys_to_remove = remove_empty_objects(document)
if keys_to_remove:
update_operation = {"$unset": {key: 1 for key in keys_to_remove}}
collection.update_one({"_id": document["_id"]}, update_operation)
else:
break_flag = True
break
if break_flag:
break_flag = False
break
if __name__ == "__main__":
main()
| ivy/scripts/setup_tests/synchronize_db.py/0 | {
"file_path": "ivy/scripts/setup_tests/synchronize_db.py",
"repo_id": "ivy",
"token_count": 2511
} | 73 |
#!/bin/bash
git submodule update --init --recursive
python3 -m pip install --user -e .
python3 -m pip install pre-commit
git config --global --add safe.directory /workspaces/ivy
( cd /workspaces/ivy/ && pre-commit install)
| ivy/.devcontainer/post_create_commands.sh/0 | {
"file_path": "ivy/.devcontainer/post_create_commands.sh",
"repo_id": "ivy",
"token_count": 75
} | 0 |
cff-version: 1.2.0
title: >-
Ivy: Templated deep learning for inter-framework
portability
message: >-
If you are using Ivy, we would really appreciate it if you
cite it in your work!
authors:
- given-names: Daniel
family-names: Lenton
- given-names: Fabio
family-names: Pardo
- given-names: Fabian
family-names: Falck
- given-names: Stephen
family-names: James
- given-names: Ronald
family-names: Clark
identifiers:
- type: doi
value: 10.48550/arXiv.2102.02886
description: 'arXiv preprint '
repository-code: 'https://github.com/unifyai/ivy'
url: 'https://unify.ai/'
repository: 'https://github.com/unifyai/'
abstract: 'We introduce Ivy, a templated Deep Learning (DL) framework which abstracts existing DL frameworks. Ivy unifies the core functions of these frameworks to exhibit consistent call signatures, syntax and input-output behaviour. New high-level framework-agnostic functions and classes, which are usable alongside framework-specific code, can then be implemented as compositions of the unified low-level Ivy functions. Ivy currently supports TensorFlow, PyTorch, MXNet, Jax and NumPy. We also release four pure-Ivy libraries for mechanics, 3D vision, robotics, and differentiable environments. Through our evaluations, we show that Ivy can significantly reduce lines of code with a runtime overhead of less than 1% in most cases. We welcome developers to join the Ivy community by writing their own functions, layers and libraries in Ivy, maximizing their audience and helping to accelerate DL research through inter-framework codebases.'
license: Apache-2.0
preferred-citation:
type: article
authors:
- given-names: Daniel
family-names: Lenton
- given-names: Fabio
family-names: Pardo
- given-names: Fabian
family-names: Falck
- given-names: Stephen
family-names: James
- given-names: Ronald
family-names: Clark
doi: 10.48550/arXiv.2102.02886
title: "Ivy: Templated deep learning for inter-framework portability"
| ivy/CITATION.cff/0 | {
"file_path": "ivy/CITATION.cff",
"repo_id": "ivy",
"token_count": 588
} | 1 |
docker build --progress=plain --no-cache -t unifyai/ivy:latest-gpu -f DockerfileGPU ..
| ivy/docker/build_gpu_dockerfile.sh/0 | {
"file_path": "ivy/docker/build_gpu_dockerfile.sh",
"repo_id": "ivy",
"token_count": 30
} | 2 |
{% extends "top_level_toc_recursive.rst" %}
{% set ivy_module_map = {
"ivy.stateful": "Framework classes",
"ivy.nested_array": "Nested array",
"ivy.utils": "Utils",
"ivy_tests.test_ivy.helpers": "Testing",
} %}
{% block name %}{{ivy_module_map[fullname] | escape | underline}}{% endblock %}
| ivy/docs/_templates/top_ivy_toc.rst/0 | {
"file_path": "ivy/docs/_templates/top_ivy_toc.rst",
"repo_id": "ivy",
"token_count": 136
} | 3 |
Building the Docs Pipeline
==========================
.. _Sphinx: http://sphinx-doc.org/
.. _Sphinx configuration file: https://www.sphinx-doc.org/en/master/usage/configuration.html
.. _autosummary: https://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html
.. _doc-builder repository: https://github.com/unifyai/doc-builder
.. warning::
   Be aware that the doc-builder was originally developed for Linux. In theory, you can run
   it on any platform that supports either Docker or Windows, but it has only been tested on
   Linux. If you find any Windows-related issues, feel free to open an issue so we can review it.
.. note::
   Recommendation:
   Use the convenience script if you build the docs regularly,
   as it will not re-download the dependencies.
   If you have a slow internet connection, consider using GitHub Codespaces: the large
   dependency files our script downloads will be fetched over a much faster connection.
To build our docs, we use `Sphinx`_, an extendable documentation generator
for Python. As our building pipeline is complex, we heavily customize Sphinx using
custom and third-party extensions, and we also provide a convenience script to build
the docs.
How the doc-builder is being run
--------------------------------
There are 2 ways to build the docs:
1. Through a convenience script, which is useful for local development.
2. Through a Docker image, which is the recommended way.
We will go through how they work in the following sections.
The convenience script
~~~~~~~~~~~~~~~~~~~~~~
``make_docs_without_docker.sh`` is a convenience script to build the docs, which can be
found in the `doc-builder repository`_. It takes one argument, the path to a project to
document. The project should have the following characteristics:
1. It should have a ``requirements.txt``, or alternatively a ``requirements`` folder,
which includes a ``requirements.txt`` and an optional ``optional.txt`` file.
2. It can have an optional ``optional.txt`` file; if it does not, the script will
   simply ignore it.
3. It should have a ``docs`` folder, which contains an ``index.rst`` file. This file
is the root of the documentation.
4. It can contain an optional ``docs/prebuild.sh`` file, which will be executed before
the docs are built. This is useful if you need to install some dependencies for the
docs to build.
5. It can contain an optional ``docs/partial_conf.py`` which is a partial `Sphinx
configuration file`_.
This file will be imported with the default ``conf.py`` file located in the
``doc-builder`` repo.
Running the script:
.. code-block:: bash
./make_docs_without_docker.sh /path/to/project
will result in the creation of documentation for the project in the directory
``docs/build``.
Options
"""""""
-h, --help Show this help
-C, --no-cleanup Disable the backup/cleanup procedure
-g, --git-add Stage changed files before generating the docs
-s, --skip-dependencies-install Skip installing dependencies using pip
-j, --jobs N Build in parallel with N processes where possible
(special value ``auto`` will set N to cpu-count)
-D setting Override a setting in ``conf.py``
The Docker image
~~~~~~~~~~~~~~~~
The Docker image `unifyai/doc-builder <https://hub.docker.com/r/unifyai/doc-builder>`_
works as a wrapper around the ``make_docs_without_docker.sh`` script. It runs the script
on the ``/project`` directory, located in the container `as shown here
<https://github.com/unifyai/doc-builder/blob/main/Dockerfile#L21>`_:
.. code-block:: bash
./make_docs_without_docker.sh /project
To build the docs through docker you use this command:
.. code-block:: bash
docker run -v /path/to/project:/project unifyai/doc-builder
You can also add options described in the :ref:`overview/deep_dive/building_the_docs_pipeline:The convenience script` section.
.. code-block:: bash
docker run -v /path/to/project:/project unifyai/doc-builder --no-cleanup
How Ivy's docs are structured
-----------------------------
Looking at `Ivy docs <https://github.com/unifyai/ivy/tree/main/docs>`_, we can see
that it is structured like this:
.. code-block:: bash
docs
├── index.rst
├── partial_conf.py
├── prebuild.sh
├── overview
│ ├── background.rst
│ ├── ...
│ └── ...
└── ...
Let's go through each of these files and folders.
``index.rst``
~~~~~~~~~~~~~
This is the root of the documentation. It is the first file that Sphinx will read when
building the docs. It is also the file that will be displayed when you open the docs
in a browser.
Here is a segment of the file:
.. code-block:: rst
.. include:: ../README.rst
.. toctree::
:hidden:
:maxdepth: -1
:caption: Overview
overview/background.rst
overview/design.rst
overview/related_work.rst
overview/extensions.rst
overview/contributing.rst
overview/deep_dive.rst
overview/faq.rst
overview/glossary.rst
.. autosummary::
:toctree: docs/functional
:template: top_functional_toc.rst
:caption: API Reference
:recursive:
:hide-table:
ivy.functional.ivy
You can see here different reStructuredText directives. The first one is ``include``,
which simply includes the main README file of the project. This is the place to edit if you
want to make the rendered docs look different from the README; otherwise you can simply include it as
is.
The second directive is ``toctree``, which is used to create a table of contents. The
``:hidden:`` option hides the table of contents from the rendered docs, only keeping it
on the left side of the docs, not inline in the page itself. The ``:maxdepth:`` option
is used to specify how deep the table of contents should go. The ``:caption:`` option
is used to specify the title of the table of contents. The rest of the arguments are
the files that should be included in the table of contents. This recursively points
to every page in this documentation; for example, this page is included in the
``toctree`` of ``overview/deep_dive.rst``, which is included in the ``toctree`` of
``index.rst``. You can read more about the ``toctree`` directive in the `sphinx docs
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-toctree>`_; from
now on we'll only explain the directives that are custom to Ivy's doc-builder.
The last directive is ``autosummary``, which is used to automatically generate a table
of contents for a module, as well as the documentation itself, by
discovering the docstrings of the module. This is a custom directive, built on the original
`autosummary`_
extension. We will explain in detail how we changed it in :ref:`overview/deep_dive/building_the_docs_pipeline:Custom Extensions`.
``partial_conf.py``
~~~~~~~~~~~~~~~~~~~
This is a partial `Sphinx configuration file`_, which is imported by the
`conf.py <https://github.com/unifyai/doc-builder/blob/main/docs/conf.py#L150>`_.
It's used to customize options that are specific to the project being documented,
while common configurations such as the theme, the extensions, etc. live in the
original ``conf.py``.
This is a part of ``partial_conf.py``:
.. code-block:: python
ivy_toctree_caption_map = {
"ivy.functional.ivy": "Functions",
"ivy.stateful": "Framework classes",
"ivy.nested_array": "Nested array",
"ivy.utils": "Utils",
"ivy_tests.test_ivy.helpers": "Testing",
}
Here we are overriding the ``ivy_toctree_caption_map`` configuration, which is used to
customize the title of the table of contents for each module.
``ivy_toctree_caption_map`` is one of the configuration options we have in our
``custom_autosummary`` extension, which will be covered extensively in
:ref:`overview/deep_dive/building_the_docs_pipeline:Custom Extensions`.
``prebuild.sh``
~~~~~~~~~~~~~~~
This is an optional file, which is executed before the docs are built. This is useful
if you need to install some dependencies for the docs to build. In Ivy's case, we
install ``torch`` then ``torch-scatter`` sequentially to avoid a bug in
``torch-scatter``'s setup. It is also the place to make any changes to the docker container
before building the docs.
Custom Extensions
-----------------
As of writing this documentation, Ivy's doc-builder is using 4 custom extensions:
#. ``custom_autosummary``
#. ``discussion_linker``
#. ``skippable_function``
#. ``ivy_data``
``custom_autosummary``
~~~~~~~~~~~~~~~~~~~~~~
This extension is a modified version of the original `autosummary`_, which is used to
discover and automatically document the docstrings of a module. This is done by
generating "stub" rst files for each module listed in the ``autosummary`` directive;
you can add a template for these stub files using the ``:template:`` option, which can
in turn include the ``autosummary`` directive again, recursing on the whole module.
Unfortunately, the original ``autosummary`` extension is very limited, forcing you to
have a table of contents for each module.
We'll go through each option or configuration value added to the original ``autosummary``.
``:hide-table:``
""""""""""""""""
As the name suggests, the original behavior of ``autosummary`` is to generate a table
of contents for each module, and it generates stub files only if the ``:toctree:`` option is
specified. As we only need the ``toctree``, this option hides the table of contents, but
it still requires the ``:toctree:`` option to be specified.
``discussion_linker``
~~~~~~~~~~~~~~~~~~~~~
Discussion linker is a simple extension that adds a link to our discord server, as well
as specific discussion boards for each module.
The directive is included like this:
.. code-block:: rst
.. discussion-links:: module.foo
First it will look for the ``discussion_channel_map`` configuration, in Ivy it looks like
this:
.. code-block:: python
discussion_channel_map = {
...,
"ivy.functional.ivy.creation": ["1000043690254946374"],
"ivy.functional.ivy.data_type": ["1000043749088436315"],
...,
}
The key is the module name; if it's not found, the ``discussion-link`` directive will
render an empty node. The first and only value in the list is the channel id of the
module; it is kept in a list because we used to have forums as well, but they have been removed.
The output string is generated by a series of replaces on template strings, which are
customizable using the config. To understand how it works, let's look at the default
configurations and their values:
- ``discussion_paragraph``: ``"This should have hopefully given you an overview of the
{{submodule}} submodule, if you have any questions, please feel free to reach out on
our [discord]({{discord_link}}) in the [{{submodule}} channel]({{channel_link}})!"``
- ``discord_link``: ``"https://discord.gg/ZVQdvbzNQJ"``
- ``channel_link``: ``"https://discord.com/channels/799879767196958751/{{channel_id}}"``
Here is an example of how it works for ``ivy.functional.ivy.creation``:
1. First we resolve the ``{{submodule}}`` template string, which is the last part of the
module name, in this case it's ``creation``.
The result will be like this:
This should have hopefully given you an overview of the
**creation** submodule, if you have any questions, please feel free to reach out on
our [discord]({{discord_link}}) in the [**creation** channel]({{channel_link}})!
2. Then we resolve the ``{{discord_link}}`` template string.
The result will be like this:
This should have hopefully given you an overview of the
creation submodule, if you have any questions, please feel free to reach out on
our [discord](**https://discord.gg/ZVQdvbzNQJ**) in the [creation channel]({{channel_link}})!
3. Then we resolve the ``{{channel_link}}`` template string.
The result will be like this:
This should have hopefully given you an overview of the
creation submodule, if you have any questions, please feel free to reach out on
our [discord](\https://discord.gg/ZVQdvbzNQJ) in the [creation channel](**https://discord.com/channels/799879767196958751/{{channel_id}}**)!
4. We finally resolve ``{{channel_id}}`` template strings.
The result will be like this:
This should have hopefully given you an overview of the
creation submodule, if you have any questions, please feel free to reach out on
our [discord](\https://discord.gg/ZVQdvbzNQJ) in the [creation channel](\https://discord.com/channels/799879767196958751/**1000043690254946374**)!
5. After that we render the node paragraph as if it's a Markdown text resulting this:
This should have hopefully given you an overview of the
creation submodule, if you have any questions, please feel free to reach out on
our `discord <https://discord.gg/ZVQdvbzNQJ>`_ in the `creation channel
<https://discord.com/channels/799879767196958751/1000043690254946374>`_!
All of the above template strings can be customized using the configuration, so feel free
to change them to your liking.
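Putting the steps above together, the whole resolution boils down to a chain of string
replacements. Below is a rough sketch of that idea, for illustration only; it is not the
extension's actual code, and it assumes a plain ``config`` dict holding the settings above:
.. code-block:: python
    # Hypothetical sketch of the discussion-linker resolution, illustration only.
    def resolve_discussion_paragraph(module_name: str, config: dict) -> str:
        submodule = module_name.split(".")[-1]
        channel_id = config["discussion_channel_map"][module_name][0]
        text = config["discussion_paragraph"]
        text = text.replace("{{submodule}}", submodule)
        text = text.replace("{{discord_link}}", config["discord_link"])
        text = text.replace("{{channel_link}}", config["channel_link"])
        text = text.replace("{{channel_id}}", channel_id)
        return text  # the resulting Markdown is rendered into the final paragraph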
``skippable_function``
~~~~~~~~~~~~~~~~~~~~~~
This extension provides a custom auto documenter ``autoskippablemethod`` that skips
functions that match values in the ``skippable_method_attributes`` configuration.
This is an example of ``skippable_method_attributes`` configuration in
``partial_conf.py``:
.. code-block:: python
skippable_method_attributes = [
{
"__qualname__": "_wrap_function.<locals>.new_function"
}
]
This will remove any function that has a ``__qualname__`` attribute equal to
``_wrap_function.<locals>.new_function``.
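Conceptually, the skip check just compares each listed attribute set against the documented
object. A minimal sketch of such a predicate (illustrative only, not the extension's actual
code) could look like this:
.. code-block:: python
    # Hypothetical sketch: skip an object if it matches any configured attribute set.
    def should_skip(obj, skippable_method_attributes) -> bool:
        return any(
            all(getattr(obj, name, None) == value for name, value in attrs.items())
            for attrs in skippable_method_attributes
        )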
``ivy_data``
~~~~~~~~~~~~
This is a custom documenter for ``autodoc`` that documents Ivy data attributes that live
in ``ivy.functional.ivy``; it replaces the displayed module name with ``ivy.`` instead of
``ivy.functional.ivy.<submodule>``.
It's used instead of simply writing ``ivy.<data attribute>`` because data attributes have
no ``__doc__`` attribute; instead, their docs are discovered by parsing the source code itself.
So for Sphinx to find the required docs, it needs to be supplied the full module name,
and the ``autoivydata`` directive then replaces the module name with ``ivy.``.
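For illustration, a documenter with this behavior could be sketched as below; this is a
rough, hypothetical outline built on Sphinx's autodoc API, not the actual extension code:
.. code-block:: python
    # Hypothetical sketch of an "autoivydata" documenter, illustration only.
    from sphinx.ext.autodoc import DataDocumenter
    class IvyDataDocumenter(DataDocumenter):
        objtype = "ivydata"  # autodoc registers this as the "autoivydata" directive
        def format_name(self) -> str:
            # present "ivy.functional.ivy.<submodule>.<attr>" as "ivy.<attr>"
            return "ivy." + self.objpath[-1]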
Please refer to the `auto documenter guide in sphinx documentation
<https://www.sphinx-doc.org/en/master/development/tutorials/autodoc_ext.html>`_ for more
info.
| ivy/docs/overview/deep_dive/building_the_docs_pipeline.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/building_the_docs_pipeline.rst",
"repo_id": "ivy",
"token_count": 4494
} | 4 |
Ivy Frontend Tests
==================
.. _`here`: ../design/ivy_as_a_transpiler.rst
.. _`ivy frontends tests thread`: https://discord.com/channels/799879767196958751/1190246804940402738
.. _`test ivy`: https://github.com/unifyai/ivy/tree/db9a22d96efd3820fb289e9997eb41dda6570868/ivy_tests/test_ivy
.. _`test_frontend_function`: https://github.com/unifyai/ivy/blob/591ac37a664ebdf2ca50a5b0751a3a54ee9d5934/ivy_tests/test_ivy/helpers.py#L1047
.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`Function Wrapping`: function_wrapping.rst
.. _`open task`: ../contributing/open_tasks.rst
.. _`Ivy Tests`: ivy_tests.rst
.. _`Function Testing Helpers`: https://github.com/unifyai/ivy/blob/bf0becd459004ae6cffeb3c38c02c94eab5b7721/ivy_tests/test_ivy/helpers/function_testing.py
.. _`CI Pipeline`: continuous_integration.rst
Introduction
------------
Just like the backend functional API, our frontend functional API has a collection of Ivy tests located in the subfolder `test ivy`_.
In this section of the deep dive we are going to jump into Ivy Frontend Tests!
**Writing Ivy Frontend Tests**
The Ivy tests in this section make use of hypothesis for performing property-based testing, which is documented in detail in the Ivy Tests section of the Deep Dive.
We assume knowledge of hypothesis data generation strategies and how to implement them for testing.
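As a quick refresher, a property-based test with hypothesis looks roughly like the
following; this is a generic, self-contained example, unrelated to Ivy's own helpers:
.. code-block:: python
    # Generic hypothesis example: the property must hold for every drawn input.
    from hypothesis import given, strategies as st
    @given(xs=st.lists(st.integers(), min_size=1))
    def test_reverse_twice_is_identity(xs):
        assert list(reversed(list(reversed(xs)))) == xs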
**Ivy Decorators**
Ivy provides test decorators for frontend tests to make them easier and more maintainable; currently there are two:
* :func:`@handle_frontend_test` a decorator which is used to test frontend functions, for example :func:`np.zeros` and :func:`tensorflow.tan`.
* :func:`@handle_frontend_method` a decorator which is used to test frontend methods and special methods, for example :func:`torch.Tensor.add` and :func:`numpy.ndarray.__add__`.
**Important Helper Functions**
* :func:`helpers.test_frontend_function` helper function that is designed to do the heavy lifting and make testing Ivy Frontends easy!
One of the many `Function Testing Helpers`_.
It is used to test a frontend function for the current backend by comparing the result with the function in the associated framework.
* :func:`helpers.get_dtypes` helper function that returns either a full list of data types or a single data type; we should **always** use `helpers.get_dtypes` to sample data types.
* :func:`helpers.dtype_and_values` is a convenience function that allows you to generate arrays of any dimension and their associated data types, returned as :code:`([dtypes], [np.array])`.
* :func:`helpers.get_shape` is a convenience function that allows you to generate an array shape of type :code:`tuple`
* :func:`np_frontend_helpers.where` a generation strategy to generate values for NumPy's optional :code:`where` argument.
* :func:`np_frontend_helpers.test_frontend_function` behaves identical to :func:`helpers.test_frontend_function` but handles NumPy's optional :code:`where` argument
**Useful Notes**
* We should always ensure that our data type generation is complete.
  Generating only float data types for a function that accepts all numeric data types is not complete; a complete set would include **all** numeric data types.
* The :func:`test_frontend_function` argument :code:`fn_tree` refers to the frontend function's reference in its native namespace not just the function name.
For example :func:`lax.tan` is needed for some functions in Jax, :func:`nn.functional.relu` is needed for some functions in PyTorch etc.
To get a better understanding of writing frontend tests, let's run through some examples!
Frontend Test Examples
-----------------------
Before you begin writing a frontend test, make sure you are placing it in the correct location.
See the :ref:`/overview/contributing/open_tasks:Where to place a frontend function` sub-section of the frontend APIs `open task`_ for more details.
ivy.tan()
^^^^^^^^^
**Jax**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_jax/test_lax/test_operators.py
@handle_frontend_test(
fn_tree="jax.lax.tan",
dtype_and_x=helpers.dtype_and_values(available_dtypes=helpers.get_dtypes("float")),
test_with_out=st.just(False),
)
def test_jax_tan(
*,
dtype_and_x,
on_device,
fn_tree,
backend_fw,
frontend,
test_flags,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
)
* As you can see we generate almost everything we need to test a frontend function within the :code:`@handle_frontend_test` decorator.
* We set :code:`fn_tree` to :code:`jax.lax.tan` which is the path to the function in the Jax namespace.
* We use :code:`helpers.get_dtypes("float")` to generate :code:`available_dtypes`; these are valid :code:`float` data types specifically for Jax.
* We do not generate any values for :code:`as_variable`, :code:`native_array`, :code:`frontend`, :code:`num_positional_args`, :code:`on_device`; these values are generated by :func:`handle_frontend_test`.
* We unpack the :code:`dtype_and_x` to :code:`input_dtype` and :code:`x`.
* We then pass the generated values to :code:`helpers.test_frontend_function` which tests the frontend function.
* :func:`jax.lax.tan` does not support :code:`out` arguments so we set :code:`with_out` to :code:`False`.
* One last important note is that all helper functions are designed to take keyword arguments only.
**NumPy**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_numpy/test_mathematical_functions/test_trigonometric_functions.py
@handle_frontend_test(
fn_tree="numpy.tan",
dtypes_values_casting=np_frontend_helpers.dtypes_values_casting_dtype(
arr_func=[
lambda: helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
)
],
),
where=np_frontend_helpers.where(),
number_positional_args=np_frontend_helpers.get_num_positional_args_ufunc(
fn_name="tan"
),
)
def test_numpy_tan(
dtypes_values_casting,
where,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtypes, x, casting, dtype = dtypes_values_casting
where, input_dtypes, test_flags = np_frontend_helpers.handle_where_and_array_bools(
where=where,
input_dtype=input_dtypes,
test_flags=test_flags,
)
np_frontend_helpers.test_frontend_function(
input_dtypes=input_dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-02,
atol=1e-02,
x=x[0],
out=None,
where=where,
casting=casting,
order="K",
dtype=dtype,
subok=True,
)
* We set :code:`fn_tree` to :code:`numpy.tan` which is the path to the function in the NumPy namespace.
* Here we use :code:`helpers.get_dtypes("float")` to generate :code:`available_dtypes`; these are valid :code:`float` data types specifically for NumPy.
* NumPy has an optional argument :code:`where` which is generated using :func:`np_frontend_helpers.where`.
* Using :func:`np_frontend_helpers.handle_where_and_array_bools` we do some processing on the generated :code:`where` value.
* Instead of :func:`helpers.test_frontend_function` we use :func:`np_frontend_helpers.test_frontend_function` which behaves the same but has some extra code to handle the :code:`where` argument.
* :code:`casting`, :code:`order`, :code:`subok` and other are optional arguments for :func:`numpy.tan`.
**TensorFlow**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_tensorflow/test_math.py
@handle_frontend_test(
fn_tree="tensorflow.math.tan",
dtype_and_x=helpers.dtype_and_values(available_dtypes=helpers.get_dtypes("float")),
test_with_out=st.just(False),
)
def test_tensorflow_tan(
*,
dtype_and_x,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
x=x[0],
)
* We set :code:`fn_tree` to :code:`tensorflow.math.tan` which is the path to the function in the TensorFlow namespace.
* We use :code:`helpers.get_dtypes("float")` to generate :code:`available_dtypes`; these are valid float data types specifically for the function.
**PyTorch**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_torch/test_pointwise_ops.py
@handle_frontend_test(
fn_tree="torch.tan",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
),
)
def test_torch_tan(
*,
dtype_and_x,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=x[0],
)
* We use :code:`helpers.get_dtypes("float")` to generate :code:`available_dtypes`; these are valid float data types specifically for the function.
ivy.full()
^^^^^^^^^^
Here we are going to look at an example of a function that does not consume an :code:`array`.
This is the creation function :func:`full`, which takes an array shape as an argument to create an array filled with elements of a given value.
This function requires us to create extra functions for generating :code:`shape` and :code:`fill_value`; these use the :code:`shared` hypothesis strategy.
**Jax**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_jax/test_lax/test_operators.py
@st.composite
def _fill_value(draw):
dtype = draw(helpers.get_dtypes("numeric", full=False, key="dtype"))[0]
with update_backend(test_globals.CURRENT_BACKEND) as ivy_backend:
if ivy_backend.is_uint_dtype(dtype):
return draw(helpers.ints(min_value=0, max_value=5))
elif ivy_backend.is_int_dtype(dtype):
return draw(helpers.ints(min_value=-5, max_value=5))
return draw(helpers.floats(min_value=-5, max_value=5))
@handle_frontend_test(
fn_tree="jax.lax.full",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
fill_value=_fill_value(),
dtypes=helpers.get_dtypes("numeric", full=False, key="dtype"),
)
def test_jax_full(
*,
shape,
fill_value,
dtypes,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
helpers.test_frontend_function(
input_dtypes=dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
shape=shape,
fill_value=fill_value,
dtype=dtypes[0],
)
* The custom function we use is :code:`_fill_value` which generates a :code:`fill_value` to use for the :code:`fill_value` argument but handles the complications of :code:`int` and :code:`uint` types correctly.
* We use the helper function :func:`helpers.get_shape` to generate :code:`shape`.
* We use :code:`helpers.get_dtypes` to generate :code:`dtype`; these are valid numeric data types specifically for Jax.
This is used to specify the data type of the output array.
* :func:`full` does not consume :code:`array`.
**NumPy**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_numpy/creation_routines/test_from_shape_or_value.py
@st.composite
def _input_fill_and_dtype(draw):
dtype = draw(helpers.get_dtypes("float", full=False))
dtype_and_input = draw(helpers.dtype_and_values(dtype=dtype))
with update_backend(test_globals.CURRENT_BACKEND) as ivy_backend:
if ivy_backend.is_uint_dtype(dtype[0]):
fill_values = draw(st.integers(min_value=0, max_value=5))
elif ivy_backend.is_int_dtype(dtype[0]):
fill_values = draw(st.integers(min_value=-5, max_value=5))
else:
fill_values = draw(
helpers.floats(
min_value=-5,
max_value=5,
large_abs_safety_factor=10,
small_abs_safety_factor=10,
safety_factor_scale="log",
)
)
dtype_to_cast = draw(helpers.get_dtypes("float", full=False))
return dtype, dtype_and_input[1], fill_values, dtype_to_cast[0]
# full
@handle_frontend_test(
fn_tree="numpy.full",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
input_fill_dtype=_input_fill_and_dtype(),
test_with_out=st.just(False),
)
def test_numpy_full(
shape,
input_fill_dtype,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
input_dtype, x, fill, dtype_to_cast = input_fill_dtype
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
shape=shape,
fill_value=fill,
dtype=dtype_to_cast,
)
* We use :func:`helpers.get_dtypes` to generate :code:`dtype`; these are valid float data types specifically for NumPy.
* :func:`numpy.full` does not have a :code:`where` argument, so we can use :func:`helpers.test_frontend_function`; we specify the `out` flag explicitly.
**TensorFlow**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_tensorflow/test_raw_ops.py
@st.composite
def _fill_value(draw):
dtype = draw(_dtypes())[0]
with update_backend(test_globals.CURRENT_BACKEND) as ivy_backend:
if ivy_backend.is_uint_dtype(dtype):
return draw(helpers.ints(min_value=0, max_value=5))
elif ivy_backend.is_int_dtype(dtype):
return draw(helpers.ints(min_value=-5, max_value=5))
return draw(helpers.floats(min_value=-5, max_value=5))
# fill
@handle_frontend_test(
fn_tree="tensorflow.raw_ops.Fill",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
min_dim_size=1,
),
fill_value=_fill_value(),
dtypes=_dtypes(),
test_with_out=st.just(False),
)
def test_tensorflow_Fill( # NOQA
*,
shape,
fill_value,
dtypes,
frontend,
backend_fw,
test_flags,
fn_tree,
on_device,
):
helpers.test_frontend_function(
input_dtypes=dtypes,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
rtol=1e-05,
dims=shape,
value=fill_value,
)
* We use :func:`helpers.get_dtypes` to generate :code:`dtype`; these are valid numeric data types specifically for this function.
* TensorFlow's version of :func:`full` is named :func:`Fill`, therefore we specify the :code:`fn_tree` argument to be :code:`"tensorflow.raw_ops.Fill"`.
* When running the test there were some small discrepancies between the values, so we use :code:`rtol` to specify the relative tolerance. We specify the `out` flag explicitly.
**PyTorch**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py
@st.composite
def _fill_value(draw):
with_array = draw(st.sampled_from([True, False]))
dtype = draw(st.shared(helpers.get_dtypes("numeric", full=False), key="dtype"))[0]
with update_backend(test_globals.CURRENT_BACKEND) as ivy_backend:
if ivy_backend.is_uint_dtype(dtype):
ret = draw(helpers.ints(min_value=0, max_value=5))
elif ivy_backend.is_int_dtype(dtype):
ret = draw(helpers.ints(min_value=-5, max_value=5))
else:
ret = draw(helpers.floats(min_value=-5, max_value=5))
if with_array:
return np.array(ret, dtype=dtype)
else:
return ret
@handle_frontend_test(
fn_tree="torch.full",
shape=helpers.get_shape(
allow_none=False,
min_num_dims=1,
max_num_dims=5,
min_dim_size=1,
max_dim_size=10,
),
fill_value=_fill_value(),
dtype=st.shared(helpers.get_dtypes("numeric", full=False), key="dtype"),
)
def test_torch_full(
*,
shape,
fill_value,
dtype,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
helpers.test_frontend_function(
input_dtypes=dtype,
on_device=on_device,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
size=shape,
fill_value=fill_value,
dtype=dtype[0],
device=on_device,
)
* We use :code:`helpers.get_dtypes` to generate :code:`dtype`; these are valid numeric data types specifically for Torch.
Testing Without Using Test Values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Even when using hypothesis, there are some cases in which we set :code:`test_values=False`. For example, suppose we have a
function add_noise() and we call it on x, asserting (internally we use np.allclose) that the result
from the torch backend matches tensorflow. The test will always fail, because add_noise() depends on an internal random
seed that we have no control over. What changes is only how we test for equality: in such cases we cannot compare
values directly, and we have to reconstruct the output instead, as shown in the example below.
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_torch/test_linalg.py
@handle_frontend_test(
fn_tree="torch.linalg.qr",
dtype_and_input=_get_dtype_and_matrix(batch=True),
)
def test_torch_qr(
*,
dtype_and_input,
frontend,
test_flags,
fn_tree,
backend_fw,
on_device,
):
input_dtype, x = dtype_and_input
ret, frontend_ret = helpers.test_frontend_function(
input_dtypes=input_dtype,
backend_to_test=backend_fw,
frontend=frontend,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
A=x[0],
test_values=False,
)
with update_backend(backend_fw) as ivy_backend:
ret = [ivy_backend.to_numpy(x) for x in ret]
frontend_ret = [np.asarray(x) for x in frontend_ret]
q, r = ret
frontend_q, frontend_r = frontend_ret
assert_all_close(
ret_np=q @ r,
ret_from_gt_np=frontend_q @ frontend_r,
rtol=1e-2,
atol=1e-2,
backend=backend_fw,
ground_truth_backend=frontend,
)
* The parameter :code:`test_values` is explicitly set to :code:`False` because there can be multiple solutions here, all of which can be correct, so we have to test by reconstructing the output.
What assert_all_close() actually does is check both values and dtypes; if even one of them differs, it raises
an assertion error. The examples given below will make this clearer.
.. code-block:: python
>>> a = np.array([[1., 5.]], dtype='float32')
>>> b = np.array([[2., 4.]], dtype='float32')
>>> print(helpers.assert_all_close(a, b))
AssertionError: [[1. 5.]] != [[2. 4.]]
.. code-block:: python
>>> a = np.array([[1., 5.]], dtype='float64')
>>> b = np.array([[2., 4.]], dtype='float32')
>>> print(helpers.assert_all_close(a, b))
AssertionError: the return with a TensorFlow backend produced a data type of float32, while the return with a backend returned a data type of float64.
Alias functions
^^^^^^^^^^^^^^^
Let's take a quick walkthrough of testing function aliases, which by definition behave the same as the original functions.
For example, :func:`torch_frontend.greater` has an alias function :func:`torch_frontend.gt`, and we need to make sure it works the same as the targeted framework functions :func:`torch.greater` and :func:`torch.gt`.
Code example for alias function:
.. code-block:: python
# in ivy/functional/frontends/torch/comparison_ops.py
@to_ivy_arrays_and_back
def greater(input, other, *, out=None):
input, other = torch_frontend.promote_types_of_torch_inputs(input, other)
        return ivy.greater(input, other, out=out)
gt = greater
* As you can see the :func:`torch_frontend.gt` is an alias to :func:`torch_frontend.greater` and below is how we update the unit test of :func:`torch_frontend.greater` to test the alias function as well.
**PyTorch**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_torch/test_comparison_ops.py
@handle_frontend_test(
fn_tree="torch.gt",
aliases=["torch.greater"],
dtype_and_inputs=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
allow_inf=False,
shared_dtype=True,
),
)
def test_torch_greater(
*,
dtype_and_inputs,
on_device,
fn_tree,
frontend,
backend_fw,
test_flags,
):
input_dtype, inputs = dtype_and_inputs
helpers.test_frontend_function(
input_dtypes=input_dtype,
frontend=frontend,
backend_to_test=backend_fw,
test_flags=test_flags,
fn_tree=fn_tree,
on_device=on_device,
input=inputs[0],
other=inputs[1],
)
* We added a list of all aliases, each with its full namespace path, to the test of the :code:`gt` function, such that when we are testing the original function we test its aliases as well.
* During the frontend implementation, if a new alias is introduced you only need to go to the test function of the original frontend function and add that alias, with its full namespace, to the :code:`aliases` argument of the :func:`handle_frontend_test` decorator.
Frontend Instance Method Tests
------------------------------
The frontend instance method tests are similar to the frontend function tests, but instead of testing the function directly we test the instance method of the frontend class.
The major difference is that we have more flags to pass now: most initialization functions take an array as input, and some methods may take an array as input too.
For example, :code:`ndarray.__add__` expects an array as input in addition to :code:`self`, and to make our test **complete** we need to generate separate flags for each.
**Important Helper Functions**
:func:`@handle_frontend_method` requires 3 keyword only parameters:
- :code:`class_tree` A full path to the array class in **Ivy** namespace.
- :code:`init_tree` A full path to initialization function.
- :code:`method_name` The name of the method to test.
:func:`helpers.test_frontend_method` is used to test frontend instance methods. It is used in the same way as :func:`helpers.test_frontend_function`. A few important arguments for this function are the following:
- :code:`init_input_dtypes` Input dtypes of the arguments with which we are initializing the array.
- :code:`init_all_as_kwargs_np` The data to be passed when initializing; this is a dictionary in which the numpy array containing the data is passed under the :code:`data` key.
- :code:`method_input_dtypes` The input dtypes of the argument which are to be passed to the instance method after the initialization of the array.
- :code:`method_all_as_kwargs_np` All the arguments which are to be passed to the instance method.
Frontend Instance Method Test Examples
--------------------------------------
ivy.add()
^^^^^^^^^
**NumPy**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_numpy/test_ndarray.py
@handle_frontend_method(
class_tree=CLASS_TREE,
init_tree="numpy.array",
method_name="__add__",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"), num_arrays=2
),
)
def test_numpy_instance_add__(
dtype_and_x,
frontend_method_data,
init_flags,
method_flags,
frontend,
backend_fw,
):
input_dtypes, xs = dtype_and_x
helpers.test_frontend_method(
init_input_dtypes=input_dtypes,
init_all_as_kwargs_np={
"object": xs[0],
},
method_input_dtypes=input_dtypes,
method_all_as_kwargs_np={
"value": xs[1],
},
frontend=frontend,
backend_to_test=backend_fw,
frontend_method_data=frontend_method_data,
init_flags=init_flags,
method_flags=method_flags,
)
* We specify the :code:`class_tree` to be :class:`ivy.functional.frontends.numpy.ndarray`, which is the path to the class in the ivy namespace.
* We specify the function that is used to initialize the array; for NumPy, we use :code:`numpy.array` to create a :code:`numpy.ndarray`.
* We specify the :code:`method_name` to be :meth:`__add__`, which is the name of the method in the frontend class.
**TensorFlow**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_tensorflow/test_tensor.py
@handle_frontend_method(
class_tree=CLASS_TREE,
init_tree="tensorflow.constant",
method_name="__add__",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("numeric"),
num_arrays=2,
shared_dtype=True,
),
)
def test_tensorflow_instance_add(
dtype_and_x,
frontend,
backend_fw,
frontend_method_data,
init_flags,
method_flags,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_method(
init_input_dtypes=input_dtype,
init_all_as_kwargs_np={
"value": x[0],
},
method_input_dtypes=input_dtype,
method_all_as_kwargs_np={
"y": x[1],
},
frontend=frontend,
backend_to_test=backend_fw,
frontend_method_data=frontend_method_data,
init_flags=init_flags,
method_flags=method_flags,
)
* We specify the function that is used to initialize the array; for TensorFlow, we use :code:`tensorflow.constant` to create a :code:`tensorflow.EagerTensor`.
* We specify the :code:`method_name` to be :meth:`__add__`, which corresponds to :meth:`tensorflow.EagerTensor.__add__` in the frontend class.
**PyTorch**
.. code-block:: python
# ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py
@handle_frontend_method(
class_tree=CLASS_TREE,
init_tree="torch.tensor",
method_name="add",
dtype_and_x=helpers.dtype_and_values(
available_dtypes=helpers.get_dtypes("float"),
num_arrays=2,
min_value=-1e04,
max_value=1e04,
allow_inf=False,
),
alpha=st.floats(min_value=-1e04, max_value=1e04, allow_infinity=False),
)
def test_torch_instance_add(
dtype_and_x,
alpha,
frontend,
backend_fw,
frontend_method_data,
init_flags,
method_flags,
):
input_dtype, x = dtype_and_x
helpers.test_frontend_method(
init_input_dtypes=input_dtype,
init_all_as_kwargs_np={
"data": x[0],
},
method_input_dtypes=input_dtype,
method_all_as_kwargs_np={
"other": x[1],
"alpha": alpha,
},
frontend_method_data=frontend_method_data,
init_flags=init_flags,
method_flags=method_flags,
frontend=frontend,
backend_to_test=backend_fw,
atol_=1e-02,
)
* We specify the function that is used to initialize the array; for PyTorch, we use :code:`torch.tensor` to create a :code:`torch.Tensor`.
* We specify the :code:`method_name` to be :meth:`add`, which corresponds to :meth:`torch.Tensor.add` in the frontend class.
Hypothesis Helpers
------------------
Naturally, many of the functions in the various frontend APIs are very similar to many of the functions in the Ivy API.
Therefore, the unit tests will follow very similar structures with regards to the data generated for testing.
There are many data generation helper functions defined in the Ivy API test files, such as :func:`_arrays_idx_n_dtypes` defined in :mod:`ivy/ivy_tests/test_ivy/test_functional/test_core/test_manipulation.py`.
This helper generates: a set of concatenation-compatible arrays, the index for the concatenation, and the data types of each array.
Not surprisingly, this helper is used for testing :func:`ivy.concat`, as shown `here <https://github.com/unifyai/ivy/blob/86287f4e45bbe581fe54e37d5081c684130cba2b/ivy_tests/test_ivy/test_functional/test_core/test_manipulation.py#L53>`_.
Clearly, this helper would also be very useful for testing the various frontend concatenation functions, such as :code:`jax.numpy.concatenate`, :code:`numpy.concatenate`, :code:`tensorflow.concat` and :code:`torch.cat`.
We could simply copy and paste the implementation from :mod:`/ivy_tests/test_ivy/test_functional/test_core/test_manipulation.py` into each file :mod:`/ivy_tests/test_ivy/test_frontends/test_<framework>/test_<group>.py`, but this would result in needless duplication.
Instead, we should simply import the helper function from the ivy test file into the frontend test file, like so: :code:`from ivy_tests.test_ivy.test_functional.test_core.test_manipulation import _arrays_idx_n_dtypes`.
In cases where a helper function is uniquely useful for a frontend function without being useful for an Ivy function, then it should be implemented directly in :mod:`/ivy_tests/test_ivy/test_frontends/test_<framework>/test_<group>.py` rather than in :mod:`/ivy_tests/test_ivy/test_functional/test_core/test_<closest_relevant_group>.py`.
However, as shown above, in many cases the same helper function can be shared between the Ivy API tests and the frontend tests, and we should strive for as much sharing as possible to minimize the amount of code.
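For illustration, a frontend test reusing that shared helper could be sketched roughly as
follows; this is only a sketch, and the exact keyword names expected by the real
:code:`numpy.concatenate` frontend test may differ:
.. code-block:: python
    # Illustrative sketch only -- not the actual test in the codebase.
    from hypothesis import strategies as st
    import ivy_tests.test_ivy.helpers as helpers
    from ivy_tests.test_ivy.helpers import handle_frontend_test
    from ivy_tests.test_ivy.test_functional.test_core.test_manipulation import (
        _arrays_idx_n_dtypes,
    )
    @handle_frontend_test(
        fn_tree="numpy.concatenate",
        xs_n_input_dtypes_n_unique_idx=_arrays_idx_n_dtypes(),
        test_with_out=st.just(False),
    )
    def test_numpy_concatenate(
        *,
        xs_n_input_dtypes_n_unique_idx,
        frontend,
        backend_fw,
        test_flags,
        fn_tree,
        on_device,
    ):
        xs, input_dtypes, unique_idx = xs_n_input_dtypes_n_unique_idx
        helpers.test_frontend_function(
            input_dtypes=input_dtypes,
            frontend=frontend,
            backend_to_test=backend_fw,
            test_flags=test_flags,
            fn_tree=fn_tree,
            on_device=on_device,
            arrays=xs,
            axis=unique_idx,
        )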
**Running Ivy Frontend Tests**
The CI Pipeline runs the entire collection of Frontend Tests for the frontend that is being updated on every push to the repo.
You will need to make sure the Frontend Test is passing for each Ivy Frontend function you introduce/modify.
If a test fails on the CI, you can see details about the failure under `Details -> Run Frontend Tests` as shown in `CI Pipeline`_.
You can also run the tests locally before making a PR. See the relevant :ref:`overview/contributing/setting_up:Setting Up Testing in PyCharm` section for instructions on how to do so.
Frontend Framework Testing Configuration
----------------------------------------
To effectively test a frontend within our pipeline, it is essential to provide specific information about the framework we're trying to test.
This information includes how to create an array, return type checking, supported devices, and data types, etc.
All the required information for a frontend is stored in a configuration file, which serves as a reference for our testing pipeline.
The process of incorporating a new frontend into our testing procedure involves simply writing a new config file for that framework.
The configuration files are located at: :code:`ivy_tests/test_ivy/test_frontends/config/`
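As a purely hypothetical illustration (the real config modules define their own schema),
such a file records framework facts along these lines:
.. code-block:: python
    # Illustrative only: the names and structure here are invented for exposition.
    import numpy as np
    supported_devices = ("cpu",)
    supported_dtypes = ("float16", "float32", "float64", "int32", "int64")
    def native_array(x):
        # how to create a native array for this frontend
        return np.array(x)
    def is_native_array(x):
        # how to recognise this frontend's native array type
        return isinstance(x, np.ndarray)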
**Round Up**
This should have hopefully given you a good understanding of Ivy Frontend Tests!
If you have any questions, please feel free to reach out on `discord`_ in the `ivy frontends tests thread`_!
**Video**
.. raw:: html
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/iS7QFsQa9bI" class="video">
</iframe>
| ivy/docs/overview/deep_dive/ivy_frontends_tests.rst/0 | {
"file_path": "ivy/docs/overview/deep_dive/ivy_frontends_tests.rst",
"repo_id": "ivy",
"token_count": 14275
} | 5 |
.. _`RWorks Multi-Vendor Compiler Frameworks`:
Multi-Vendor Compiler Frameworks
================================
.. _`Tensor Virtual Machine (TVM)`: https://tvm.apache.org/
.. _`actively exploring`: https://discuss.tvm.apache.org/t/google-lasted-work-mlir-primer/1721
.. _`MLIR`: https://mlir.llvm.org/
.. _`Accelerated Linear Algebra (XLA)`: https://www.tensorflow.org/xla
.. _`TensorFlow`: https://www.tensorflow.org/
.. _`JAX`: https://jax.readthedocs.io/
.. _`PyTorch`: https://pytorch.org/
.. _`Julia`: https://julialang.org/
.. _`GNU Compiler Collection (GCC)`: https://gcc.gnu.org/git/gcc.git
.. _`GNU Project`: https://www.gnu.org/
.. _`Free Software Foundation (FSF)`: https://www.fsf.org/
.. _`discord`: https://discord.gg/sXyFF8tDtm
The compiler frameworks explained below enable Machine Learning code to be executed on a variety of hardware targets, with abstractions selected carefully in order to simplify this process and reduce the implementational overhead for supporting many different end targets.
In general, these multi-target compiler frameworks can also make use of compiler infrastructure such as that explained in the previous section, in order to follow best practices, streamline the design, and maximize interoperability.
Apache TVM
----------
Apache's `Tensor Virtual Machine (TVM)`_ is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators which aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend.
It enables the compilation of deep learning models into minimum deployable modules, and it provides the infrastructure to automatically generate and optimize models on more backends with better performance.
Apache TVM is an incredibly useful framework, which simplifies Machine Learning deployment to various hardware vendors.
TVM is `actively exploring`_ the potential integration of `MLIR`_ principles into the design.
XLA
---
`Accelerated Linear Algebra (XLA)`_ is a compiler for linear algebra that can accelerate models with potentially no source code changes.
The results are improvements in speed and memory usage.
Conventionally, when ML programs are run, all of the operations are executed individually on the target device.
In the case of GPU execution, each operation has a precompiled GPU kernel implementation that the executor dispatches to.
XLA provides an alternative mode of running models: it compiles the graph into a sequence of computation kernels generated specifically for the given model.
Because these kernels are unique to the model, they can exploit model-specific information for optimization.
XLA is supported by `TensorFlow`_, `JAX`_, `PyTorch`_ and the `Julia`_ language, and is able to compile to TPUs, GPUs, and CPUs.
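For a concrete sense of how this looks from user code, here is a minimal JAX example that
opts into XLA compilation via :func:`jax.jit` (the function body itself is arbitrary):
.. code-block:: python
    # Minimal illustration: jax.jit traces f and compiles it into fused XLA kernels.
    import jax
    import jax.numpy as jnp
    @jax.jit
    def f(x):
        return jnp.tanh(x) * 2.0 + 1.0
    print(f(jnp.arange(4.0)))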
GCC
---
The `GNU Compiler Collection (GCC)`_ is an optimizing compiler produced by the `GNU Project`_ supporting various programming languages, hardware architectures, and operating systems.
The `Free Software Foundation (FSF)`_ distributes GCC as free software under the GNU General Public License (GNU GPL).
GCC is a key component of the GNU toolchain and the standard compiler for most projects related to GNU and the Linux kernel.
With roughly 15 million lines of code in 2019, GCC is one of the biggest free programs in existence, and it has played an important role in the growth of free software, as both a tool and an example.
| ivy/docs/overview/related_work/multi_vendor_compiler_frameworks.rst/0 | {
"file_path": "ivy/docs/overview/related_work/multi_vendor_compiler_frameworks.rst",
"repo_id": "ivy",
"token_count": 869
} | 6 |
# global
import abc
from typing import Optional, Union, Literal
# local
import ivy
# ToDo: implement all methods here as public instance methods
class _ArrayWithActivations(abc.ABC):
def relu(
self: ivy.Array,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.relu. This method simply
wraps the function, and so the docstring for ivy.relu also applies to
this method with minimal changes.
Parameters
----------
self
input array.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the relu activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.relu()
>>> print(y)
ivy.array([0., 0., 1.])
"""
return ivy.relu(self._data, complex_mode=complex_mode, out=out)
def leaky_relu(
self: ivy.Array,
/,
*,
alpha: float = 0.2,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.leaky_relu. This method
simply wraps the function, and so the docstring for ivy.leaky_relu also
applies to this method with minimal changes.
Parameters
----------
self
input array.
alpha
the slope of the negative section.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the leaky relu activation function applied element-wise.
Examples
--------
>>> x = ivy.array([0.39, -0.85])
>>> y = x.leaky_relu()
>>> print(y)
ivy.array([ 0.39, -0.17])
"""
return ivy.leaky_relu(
self._data, alpha=alpha, complex_mode=complex_mode, out=out
)
def gelu(
self: ivy.Array,
/,
*,
approximate: bool = False,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.gelu. This method simply
wraps the function, and so the docstring for ivy.gelu also applies to
this method with minimal changes.
Parameters
----------
self
input array.
approximate
whether to use the approximate version of the gelu function.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the gelu activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1.2, -0.6, 1.5])
>>> y = x.gelu()
>>> print(y)
ivy.array([-0.138, -0.165, 1.4])
"""
return ivy.gelu(
self._data, approximate=approximate, complex_mode=complex_mode, out=out
)
def sigmoid(
self: ivy.Array,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.sigmoid.
This method simply wraps the function, and so the docstring for ivy.sigmoid also
applies to this method with minimal changes.
Parameters
----------
self
Input array
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
            optional output array, for writing the result to. It must have a shape
            that the input broadcasts to. Default: ``None``.
Returns
-------
ret
an array with the sigmoid activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 1., 2.])
>>> y = x.sigmoid()
>>> print(y)
ivy.array([0.269, 0.731, 0.881])
"""
return ivy.sigmoid(self._data, complex_mode=complex_mode, out=out)
def softmax(
self: ivy.Array,
/,
*,
axis: Optional[int] = None,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.softmax. This method simply
wraps the function, and so the docstring for ivy.softmax also applies
to this method with minimal changes.
Parameters
----------
self
input array.
axis
the axis or axes along which the softmax should be computed
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the softmax activation function applied element-wise.
Examples
--------
>>> x = ivy.array([1.0, 0, 1.0])
>>> y = x.softmax()
>>> print(y)
ivy.array([0.422, 0.155, 0.422])
"""
return ivy.softmax(self._data, axis=axis, complex_mode=complex_mode, out=out)
def softplus(
self: ivy.Array,
/,
*,
beta: Optional[Union[int, float]] = None,
threshold: Optional[Union[int, float]] = None,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.softplus. This method
simply wraps the function, and so the docstring for ivy.softplus also
applies to this method with minimal changes.
Parameters
----------
self
input array.
beta
the beta parameter of the softplus function.
threshold
the threshold parameter of the softplus function.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
            optional output array, for writing the result to. It must have a shape
            that the inputs broadcast to.
Returns
-------
ret
an array with the softplus activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-0.3461, -0.6491])
>>> y = x.softplus()
>>> print(y)
ivy.array([0.535,0.42])
>>> x = ivy.array([-0.3461, -0.6491])
>>> y = x.softplus(beta=0.5)
>>> print(y)
ivy.array([1.22, 1.09])
>>> x = ivy.array([1.31, 2., 2.])
>>> y = x.softplus(threshold=2, out=x)
>>> print(x)
ivy.array([1.55, 2.13, 2.13])
"""
return ivy.softplus(
self._data,
beta=beta,
threshold=threshold,
complex_mode=complex_mode,
out=out,
)
def log_softmax(
self: ivy.Array,
/,
*,
axis: Optional[int] = -1,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.log_softmax. This method
simply wraps the function, and so the docstring for ivy.log_softmax
also applies to this method with minimal changes.
Parameters
----------
self
input array.
axis
the axis or axes along which the log_softmax should be computed
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array with the log_softmax activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1.0, -0.98, 2.3])
>>> y = x.log_softmax()
>>> print(y)
ivy.array([-3.37, -3.35, -0.0719])
>>> x = ivy.array([2.0, 3.4, -4.2])
        >>> y = x.log_softmax()
        >>> print(y)
        ivy.array([-1.62, -0.221, -7.82 ])
"""
return ivy.log_softmax(
self._data,
axis=axis,
complex_mode=complex_mode,
out=out,
)
def mish(
self: ivy.Array,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.mish. This method simply
wraps the function, and so the docstring for ivy.mish also applies to
this method with minimal changes.
Parameters
----------
self
input array.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
            optional output array, for writing the result to. It must have a shape
            that the inputs broadcast to.
        Returns
        -------
        ret
            an array with the mish activation function applied element-wise.
Examples
--------
>>> x = ivy.array([-1., 0., 1.])
>>> y = x.mish()
>>> print(y)
ivy.array([-0.30340147, 0. , 0.86509842])
"""
return ivy.mish(self._data, complex_mode=complex_mode, out=out)
def hardswish(
self: ivy.Array,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Apply the hardswish activation function element-wise.
Parameters
----------
        self
            input array.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to. It must have
a shape that the inputs broadcast to.
Returns
-------
ret
            an array containing the hardswish activation of each element in ``self``.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.array([0., 0., 4.])
        >>> y = x.hardswish()
>>> y
ivy.array([0., 0., 4.])
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([-3., 4., 5.]), b=ivy.array([0., 5.]))
>>> x = ivy.hardswish(x, out=x)
>>> x
{
a: ivy.array([-0., 4., 5.]),
b: ivy.array([0., 5.])
}
"""
return ivy.hardswish(self._data, complex_mode=complex_mode, out=out)
| ivy/ivy/data_classes/array/activations.py/0 | {
"file_path": "ivy/ivy/data_classes/array/activations.py",
"repo_id": "ivy",
"token_count": 5617
} | 7 |
# global
import abc
class _ArrayWithImageExperimental(abc.ABC):
pass
| ivy/ivy/data_classes/array/experimental/image.py/0 | {
"file_path": "ivy/ivy/data_classes/array/experimental/image.py",
"repo_id": "ivy",
"token_count": 26
} | 8 |
# global
import abc
from typing import Union, Optional, Literal, Tuple, List, Sequence
# local
import ivy
inf = float("inf")
class _ArrayWithLinearAlgebra(abc.ABC):
def matmul(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
transpose_a: bool = False,
transpose_b: bool = False,
adjoint_a: bool = False,
adjoint_b: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.matmul. This method simply
wraps the function, and so the docstring for ivy.matmul also applies to
this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type. Must have at least one
dimension.
x2
second input array. Should have a numeric data type. Must have at least one
dimension.
transpose_a
if True, ``x1`` is transposed before multiplication.
transpose_b
if True, ``x2`` is transposed before multiplication.
adjoint_a
If True, takes the conjugate of the matrix then the transpose of the matrix.
adjoint_a and transpose_a cannot both be true at the same time.
adjoint_b
If True, takes the conjugate of the matrix then the transpose of the matrix.
adjoint_b and transpose_b cannot both be true at the same time.
out
optional output array, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
An array containing the output of matrix multiplication. The returned array
must have a data type determined by :ref:`type-promotion`. More details
can be found in ivy.matmul.
Examples
--------
With :class:`ivy.Array` instance inputs:
>>> x = ivy.array([1., 4.])
>>> y = ivy.array([3., 2.])
>>> z = x.matmul(y)
>>> print(z)
ivy.array(11.)
"""
return ivy.matmul(
self._data,
x2,
transpose_a=transpose_a,
transpose_b=transpose_b,
adjoint_a=adjoint_a,
adjoint_b=adjoint_b,
out=out,
)
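# Illustrative sketch of the transpose flags (added; not from the source
# docstring — values assume a NumPy-like backend):
# >>> x = ivy.array([[1., 2., 3.], [4., 5., 6.]])
# >>> y = ivy.array([[1., 2., 3.], [4., 5., 6.]])
# >>> z = x.matmul(y, transpose_b=True)
# z is the 2x2 product x @ y.T, i.e. ivy.array([[14., 32.], [32., 77.]]).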
def cholesky(
self: ivy.Array,
/,
*,
upper: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.cholesky. This method
simply wraps the function, and so the docstring for ivy.cholesky also
applies to this method with minimal changes.
Parameters
----------
self
input array having shape (..., M, M) and whose innermost two dimensions form
square symmetric positive-definite matrices. Should have a floating-point
data type.
upper
If True, the result must be the upper-triangular Cholesky factor U. If
False, the result must be the lower-triangular Cholesky factor L.
Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the Cholesky factors for each square matrix. If upper is
False, the returned array must contain lower-triangular matrices; otherwise,
the returned array must contain upper-triangular matrices. The returned
array must have a floating-point data type determined by Type Promotion
Rules and must have the same shape as self.
Examples
--------
>>> x = ivy.array([[4.0, 1.0, 2.0, 0.5, 2.0],
... [1.0, 0.5, 0.0, 0.0, 0.0],
... [2.0, 0.0, 3.0, 0.0, 0.0],
... [0.5, 0.0, 0.0, 0.625, 0.0],
... [2.0, 0.0, 0.0, 0.0, 16.0]])
>>> y = x.cholesky(upper=True)
>>> print(y)
ivy.array([[ 2. , 0.5 , 1. , 0.25, 1. ],
... [ 0. , 0.5 , -1. , -0.25, -1. ],
... [ 0. , 0. , 1. , -0.5 , -2. ],
... [ 0. , 0. , 0. , 0.5 , -3. ],
... [ 0. , 0. , 0. , 0. , 1. ]])
"""
return ivy.cholesky(self._data, upper=upper, out=out)
def cross(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = -1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.cross. This method simply
wraps the function, and so the docstring for ivy.cross also applies to
this method with minimal changes.
Parameters
----------
self
first input array. Should have a numeric data type.
x2
second input array. Must be compatible with ``self``
(see :ref:`broadcasting`). Should have a numeric data type.
axis
the axis (dimension) of x1 and x2 containing the vectors for which to
compute the cross product. If set to -1, the function computes the
cross product for vectors defined by the last axis (dimension).
Default: ``-1``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the element-wise products. The returned array must
have a data type determined by :ref:`type-promotion`.
Examples
--------
With :class:`ivy.Array` instance inputs:
>>> x = ivy.array([1., 0., 0.])
>>> y = ivy.array([0., 1., 0.])
>>> z = x.cross(y)
>>> print(z)
ivy.array([0., 0., 1.])
"""
return ivy.cross(self._data, x2, axis=axis, out=out)
def det(self: ivy.Array, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
"""
Examples
--------
>>> x = ivy.array([[2.,4.],[6.,7.]])
>>> y = x.det()
>>> print(y)
ivy.array(-10.)
"""
return ivy.det(self._data, out=out)
def diagonal(
self: ivy.Array,
/,
*,
offset: int = 0,
axis1: int = -2,
axis2: int = -1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.diagonal. This method
simply wraps the function, and so the docstring for ivy.diagonal also
applies to this method with minimal changes.
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two
dimensions form ``MxN`` matrices.
offset
offset specifying the off-diagonal relative to the main diagonal.
- ``offset = 0``: the main diagonal.
- ``offset > 0``: off-diagonal above the main diagonal.
- ``offset < 0``: off-diagonal below the main diagonal.
Default: `0`.
axis1
axis to be used as the first axis of the 2-D sub-arrays from
which the diagonals should be taken. Defaults to first axis (-2).
axis2
axis to be used as the second axis of the 2-D sub-arrays from which
the diagonals should be taken. Defaults to second axis (-1).
out
optional output array, for writing the result to. It must have
a shape that the inputs broadcast to.
Returns
-------
ret
an array containing the diagonals and whose shape is determined
by removing the last two dimensions and appending a dimension equal
to the size of the resulting diagonals. The returned array must
have the same data type as ``x``.
Examples
--------
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1., 2.],
... [3., 4.]])
>>> d = x.diagonal()
>>> print(d)
ivy.array([1., 4.])
>>> x = ivy.array([[[1., 2.],
... [3., 4.]],
... [[5., 6.],
... [7., 8.]]])
>>> d = x.diagonal()
>>> print(d)
ivy.array([[1., 4.],
[5., 8.]])
>>> x = ivy.array([[1., 2.],
... [3., 4.]])
>>> d = x.diagonal(offset=1)
>>> print(d)
ivy.array([2.])
>>> x = ivy.array([[0, 1, 2],
... [3, 4, 5],
... [6, 7, 8]])
>>> d = x.diagonal(offset=-1, axis1=0)
>>> print(d)
ivy.array([3, 7])
"""
return ivy.diagonal(
self._data, offset=offset, axis1=axis1, axis2=axis2, out=out
)
def diag(
self: ivy.Array,
/,
*,
k: int = 0,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.diag. This method simply
wraps the function, and so the docstring for ivy.diag also applies to
this method with minimal changes.
Examples
--------
>>> x = ivy.array([[0, 1, 2],
... [3, 4, 5],
... [6, 7, 8]])
>>> x.diag(k=1)
ivy.array([1, 5])
"""
return ivy.diag(self._data, k=k, out=out)
def eig(
self: ivy.Array,
/,
*,
out: Optional[ivy.Array] = None,
) -> Tuple[ivy.Array]:
return ivy.eig(self._data, out=out)
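# Illustrative usage for the undocumented `eig` wrapper (an assumed sketch;
# eigenvalue dtype and ordering are backend-dependent):
# >>> x = ivy.array([[1., 2.], [3., 4.]])
# >>> w, v = x.eig()
# w holds the eigenvalues, approximately -0.372 and 5.372 for this matrix.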
def eigh(
self: ivy.Array,
/,
*,
UPLO: str = "L",
out: Optional[ivy.Array] = None,
) -> Tuple[ivy.Array]:
return ivy.eigh(self._data, UPLO=UPLO, out=out)
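# Illustrative usage for `eigh` (an assumed sketch; for this symmetric matrix
# the eigenvalues are exactly 1 and 3):
# >>> x = ivy.array([[2., 1.], [1., 2.]])
# >>> w, v = x.eigh(UPLO="L")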
def eigvalsh(
self: ivy.Array,
/,
*,
UPLO: str = "L",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.eigvalsh. This method
simply wraps the function, and so the docstring for ivy.eigvalsh also
applies to this method with minimal changes.
Parameters
----------
x
input array having shape (..., M, M) and whose innermost two dimensions form
square matrices. Must have floating-point data type.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the computed eigenvalues. The returned array must have
shape (..., M) and have the same data type as x.
This function conforms to the `Array API Standard
<https://data-apis.org/array-api/latest/>`_. This docstring is an extension of
the `docstring <https://data-apis.org/array-api/latest/
extensions/generated/array_api.linalg.eigvalsh.html>`_
in the standard.
Both the description and the type hints above assume an array input for
simplicity, but this function is *nestable*, and therefore also
accepts :class:`ivy.Container` instances in place of any of the arguments.
Examples
--------
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[[1.0,2.0],[2.0,1.0]]])
>>> y = ivy.eigvalsh(x)
>>> print(y)
ivy.array([[-1., 3.]])
"""
return ivy.eigvalsh(self._data, UPLO=UPLO, out=out)
def inner(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Return the inner product of two vectors ``self`` and ``x2``.
Parameters
----------
self
first one-dimensional input array of size N. Input is flattened if not
already 1-dimensional. Should have a numeric data type.
x2
second one-dimensional input array of size M. Input is flattened if not
already 1-dimensional. Should have a numeric data type.
out
optional output array, for writing the result to.
It must have a shape that the inputs broadcast to.
Returns
-------
ret
a two-dimensional array containing the inner product and whose
shape is (N, M).
The returned array must have a data type determined by Type Promotion Rules.
Examples
--------
Matrices of identical shapes
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> y = ivy.array([[5., 6.], [7., 8.]])
>>> d = x.inner(y)
>>> print(d)
ivy.array([[17., 23.], [39., 53.]])
Matrices of different shapes
>>> x = ivy.array([[1., 2.], [3., 4.],[5., 6.]])
>>> y = ivy.array([[5., 6.], [7., 8.]])
>>> d = x.inner(y)
>>> print(d)
ivy.array([[17., 23.], [39., 53.], [61., 83.]])
3-D matrices
>>> x = ivy.array([[[1., 2.], [3., 4.]],
... [[5., 6.], [7., 8.]]])
>>> y = ivy.array([[[9., 10.], [11., 12.]],
... [[13., 14.], [15., 16.]]])
>>> d = x.inner(y)
>>> print(d)
ivy.array([[[[ 29., 35.], [ 41., 47.]],
[[ 67., 81.], [ 95., 109.]]],
[[[105., 127.], [149., 171.]],
[[143., 173.], [203., 233.]]]])
"""
return ivy.inner(self._data, x2, out=out)
def inv(
self: ivy.Array, /, *, adjoint: bool = False, out: Optional[ivy.Array] = None
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.inv. This method simply
wraps the function, and so the docstring for ivy.inv also applies to
this method with minimal changes.
Parameters
----------
self
input array having shape ``(..., M, M)`` and whose innermost two
dimensions form square matrices. Should have a floating-point data type.
out
optional output array, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
an array containing the multiplicative inverses. The returned array
must have a floating-point data type determined by :ref:`type-promotion`
and must have the same shape as ``x``.
Examples
--------
With :class:`ivy.Array` inputs:
>>> x = ivy.array([[1.0, 2.0],[3.0, 4.0]])
>>> y = x.inv()
>>> print(y)
ivy.array([[-2., 1.],[1.5, -0.5]])
"""
return ivy.inv(self._data, adjoint=adjoint, out=out)
def matrix_norm(
self: ivy.Array,
/,
*,
ord: Union[int, float, Literal[inf, -inf, "fro", "nuc"]] = "fro",
axis: Tuple[int, int] = (-2, -1),
keepdims: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.matrix_norm. This method
simply wraps the function, and so the docstring for ivy.matrix_norm
also applies to this method with minimal changes.
Parameters
----------
self
Input array having shape (..., M, N) and whose innermost two dimensions
form MxN matrices. Should have a floating-point data type.
ord
Order of the norm. Default is "fro".
axis
specifies the axes that hold 2-D matrices. Default: (-2, -1).
keepdims
If this is set to True, the axes which are normed over are left in
the result as dimensions with size one. With this option the result will
broadcast correctly against the original x. Default is False.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
Matrix norm of the array at specified axes.
Examples
--------
>>> x = ivy.array([[1.1, 2.2, 3.3], [1.0, 2.0, 3.0]])
>>> y = x.matrix_norm(ord=1)
>>> print(y)
ivy.array(6.3)
>>> x = ivy.arange(8, dtype=float).reshape((2, 2, 2))
>>> y = x.matrix_norm(ord="nuc", keepdims=True)
>>> print(y)
ivy.array([[[ 4.24]],
[[11.4 ]]])
"""
return ivy.matrix_norm(
self._data, ord=ord, axis=axis, keepdims=keepdims, out=out
)
def matrix_power(
self: ivy.Array,
n: int,
/,
*,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.matrix_power(self._data, n, out=out)
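# Illustrative usage for `matrix_power` (an assumed sketch, not from the
# source):
# >>> x = ivy.array([[1., 2.], [3., 4.]])
# >>> y = x.matrix_power(2)
# y equals x @ x, i.e. ivy.array([[ 7., 10.], [15., 22.]]).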
def matrix_rank(
self: ivy.Array,
/,
*,
atol: Optional[Union[float, Tuple[float]]] = None,
rtol: Optional[Union[float, Tuple[float]]] = None,
hermitian: Optional[bool] = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.matrix_rank. This method
returns the rank (i.e., number of non-zero singular values) of a matrix
(or a stack of matrices).
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two dimensions
form ``MxN`` matrices. Should have a floating-point data type.
atol
absolute tolerance. When None it’s considered to be zero.
rtol
relative tolerance for small singular values. Singular values approximately
less than or equal to ``rtol * largest_singular_value`` are set to zero.
If a ``float``, the value is equivalent to a zero-dimensional array having
a floating-point data type determined by :ref:`type-promotion`
(as applied to ``x``) and must be broadcast against each matrix.
If an ``array``, must have a floating-point data type and must be
compatible with ``shape(x)[:-2]`` (see :ref:`broadcasting`).
If ``None``, the default value is ``max(M, N) * eps``, where ``eps`` must
be the machine epsilon associated with the floating-point data type
determined by :ref:`type-promotion` (as applied to ``x``).
Default: ``None``.
hermitian
indicates whether ``x`` is Hermitian. When ``hermitian=True``, ``x`` is
assumed to be Hermitian, enabling a more efficient method for finding
eigenvalues, but x is not checked inside the function. Instead, only the
lower-triangular part of the matrix is used in the computation.
Default: ``False``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the ranks. The returned array must have a
floating-point data type determined by :ref:`type-promotion` and
must have shape ``(...)``
(i.e., must have a shape equal to ``shape(x)[:-2]``).
Examples
--------
1. Full Matrix
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> ivy.matrix_rank(x)
ivy.array(2.)
2. Rank Deficient Matrix
>>> x = ivy.array([[1., 0.], [0., 0.]])
>>> ivy.matrix_rank(x)
ivy.array(1.)
3. 1-D array - rank 1 unless all zeros
>>> x = ivy.array([1., 1.])
>>> ivy.matrix_rank(x)
ivy.array(1.)
>>> x = ivy.array([0., 0.])
>>> ivy.matrix_rank(x)
ivy.array(0)
"""
return ivy.matrix_rank(
self._data, atol=atol, rtol=rtol, hermitian=hermitian, out=out
)
def matrix_transpose(
self: ivy.Array, /, *, conjugate: bool = False, out: Optional[ivy.Array] = None
) -> ivy.Array:
"""Transpose a matrix (or a stack of matrices) ``x``.
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two
dimensions form ``MxN`` matrices.
out
optional output array, for writing the result to. It must have
a shape that the inputs broadcast to.
Returns
-------
ret
an array containing the transpose for each matrix and having shape
``(..., N, M)``. The returned array must have the same data
type as ``x``.
Examples
--------
With :class:`ivy.Array` instance inputs:
>>> x = ivy.array([[1., 2.], [0., 3.]])
>>> y = x.matrix_transpose()
>>> print(y)
ivy.array([[1., 0.],
[2., 3.]])
"""
return ivy.matrix_transpose(self._data, conjugate=conjugate, out=out)
def outer(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""Compute the outer product between two arrays.
Parameters
----------
self : ivy.Array
The first input array.
x2 : ivy.Array or ivy.NativeArray
The second input array.
out : ivy.Array, optional
Output array. If provided, it must have the same
shape as the expected output.
Returns
-------
ivy.Array
The outer product of the two arrays.
Examples
--------
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.array([4, 5])
>>> z = x.outer(y)
>>> print(z)
ivy.array([[ 4, 5],
[ 8, 10],
[12, 15]])
"""
return ivy.outer(self._data, x2, out=out)
def pinv(
self: ivy.Array,
/,
*,
rtol: Optional[Union[float, Tuple[float]]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.pinv. This method simply
wraps the function, and so the docstring for ivy.pinv also applies to
this method with minimal changes.
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two
dimensions form ``MxN`` matrices. Should have a floating-point data type.
rtol
relative tolerance for small singular values. More details in ivy.pinv.
out
optional output array, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
An array containing the pseudo-inverses. More details in ivy.pinv.
Examples
--------
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> y = x.pinv()
>>> print(y)
ivy.array([[-1.99999988, 1. ],
[ 1.5 , -0.5 ]])
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> z = ivy.zeros((2,2))
>>> x.pinv(rtol=0, out=z)
>>> print(z)
ivy.array([[-1.99999988, 1. ],
[ 1.5 , -0.5 ]])
"""
return ivy.pinv(self._data, rtol=rtol, out=out)
def qr(
self: ivy.Array,
/,
*,
mode: str = "reduced",
out: Optional[Tuple[ivy.Array, ivy.Array]] = None,
) -> Tuple[ivy.Array, ivy.Array]:
"""ivy.Array instance method variant of ivy.qr. This method simply
wraps the function, and so the docstring for ivy.qr also applies to
this method with minimal changes.
Returns the qr decomposition x = QR of a full column rank matrix (or a stack of
matrices), where Q is an orthonormal matrix (or a stack of matrices) and R is an
upper-triangular matrix (or a stack of matrices).
Parameters
----------
self
input array having shape (..., M, N) and whose innermost two dimensions form
MxN matrices of rank N. Should have a floating-point data type.
mode
decomposition mode. Should be one of the following modes:
- 'reduced': compute only the leading K columns of q, such that q and r have
dimensions (..., M, K) and (..., K, N), respectively, and where
K = min(M, N).
- 'complete': compute q and r with dimensions (..., M, M) and (..., M, N),
respectively.
Default: 'reduced'.
out
optional output tuple of arrays, for writing the result to. The arrays must
have shapes that the inputs broadcast to.
Returns
-------
ret
a namedtuple (Q, R) whose
- first element must have the field name Q and must be an array whose shape
depends on the value of mode and contain matrices with orthonormal columns.
If mode is 'complete', the array must have shape (..., M, M). If mode is
'reduced', the array must have shape (..., M, K), where K = min(M, N). The
first x.ndim-2 dimensions must have the same size as those of the input
array x.
- second element must have the field name R and must be an array whose shape
depends on the value of mode and contain upper-triangular matrices. If mode
is 'complete', the array must have shape (..., M, N). If mode is 'reduced',
the array must have shape (..., K, N), where K = min(M, N). The first
x.ndim-2 dimensions must have the same size as those of the input x.
Examples
--------
>>> x = ivy.array([[1.,2.,3.],[4.,5.,6.],[7.,8.,9.]])
>>> q, r = x.qr(mode='reduced')
>>> print(q)
ivy.array([[-0.12309149, 0.90453403, 0.40824829],
[-0.49236596, 0.30151134, -0.81649658],
[-0.86164044, -0.30151134, 0.40824829]])
>>> print(r)
ivy.array([[-8.12403841e+00,-9.60113630e+00, -1.10782342e+01],
[ 0.00000000e+00, 9.04534034e-01, 1.80906807e+00],
[ 0.00000000e+00, 0.00000000e+00, -8.88178420e-16]])
"""
return ivy.qr(self._data, mode=mode, out=out)
def slogdet(
self: ivy.Array,
) -> Tuple[ivy.Array, ivy.Array]:
"""ivy.Array instance method variant of ivy.slogdet. This method simply
wraps the function, and so the docstring for ivy.slogdet also applies
to this method with minimal changes.
Parameters
----------
self
input array having shape (..., M, M) and whose innermost two dimensions
form square matrices. Should have a floating-point data type.
Returns
-------
ret
This function returns NamedTuple with two values -
sign:
An array containing a number representing the sign of the determinant
for each square matrix.
logabsdet:
An array containing natural log of the absolute determinant of each
square matrix.
Examples
--------
>>> x = ivy.array([[1.0, 2.0],
... [3.0, 4.0]])
>>> y = x.slogdet()
>>> print(y)
slogdet(sign=ivy.array(-1.), logabsdet=ivy.array(0.69314718))
>>> x = ivy.array([[1.2, 2.0, 3.1],
... [6.0, 5.2, 4.0],
... [9.0, 8.0, 7.0]])
>>> y = x.slogdet()
>>> print(y)
slogdet(sign=ivy.array(-1.), logabsdet=ivy.array(1.098611))
"""
return ivy.slogdet(self._data)
def solve(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
adjoint: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.solve(self._data, x2, adjoint=adjoint, out=out)
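# Illustrative usage for `solve` (an assumed sketch, not from the source):
# solves A @ z = b for z.
# >>> A = ivy.array([[1., 2.], [3., 5.]])
# >>> b = ivy.array([[1.], [2.]])
# >>> z = A.solve(b)
# z is ivy.array([[-1.], [1.]]), since A @ z reproduces b.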
def svd(
self: ivy.Array,
/,
*,
compute_uv: bool = True,
full_matrices: bool = True,
) -> Union[ivy.Array, Tuple[ivy.Array, ...]]:
"""ivy.Array instance method variant of ivy.svf. This method simply
wraps the function, and so the docstring for ivy.svd also applies to
this method with minimal changes.
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two
dimensions form matrices on which to perform singular value decomposition.
Should have a floating-point data type.
full_matrices
If ``True``, compute full-sized ``U`` and ``Vh``, such that ``U`` has shape
``(..., M, M)`` and ``Vh`` has shape ``(..., N, N)``. If ``False``,
compute on the leading ``K`` singular vectors, such that ``U`` has
shape ``(..., M, K)`` and ``Vh`` has shape ``(..., K, N)`` and where
``K = min(M, N)``. Default: ``True``.
compute_uv
If ``True`` then left and right singular vectors will be computed and
returned in ``U`` and ``Vh``, respectively. Otherwise, only the
singular values will be computed, which can be significantly faster.
.. note::
with the backend set to torch, svd will still compute the left and right
singular vectors irrespective of the value of compute_uv; however, Ivy
will then only return the singular values.
Returns
-------
.. note::
once complex numbers are supported, each square matrix must be Hermitian.
ret
a namedtuple ``(U, S, Vh)``. More details in ivy.svd.
Each returned array must have the same floating-point data type as ``x``.
Examples
--------
With :class:`ivy.Array` input:
>>> x = ivy.random_normal(shape = (9, 6))
>>> U, S, Vh = x.svd()
>>> print(U.shape, S.shape, Vh.shape)
(9, 9) (6,) (6, 6)
Reconstructing x from the SVD factors gives a result numerically close to x
>>> reconstructed_x = ivy.matmul(U[:,:6] * S, Vh)
>>> print((reconstructed_x - x > 1e-3).sum())
ivy.array(0)
>>> U, S, Vh = x.svd(full_matrices = False)
>>> print(U.shape, S.shape, Vh.shape)
(9, 6) (6,) (6, 6)
"""
return ivy.svd(self._data, compute_uv=compute_uv, full_matrices=full_matrices)
def svdvals(self: ivy.Array, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
return ivy.svdvals(self._data, out=out)
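# Illustrative usage for `svdvals` (an assumed sketch, not from the source):
# >>> x = ivy.array([[3., 0.], [0., 4.]])
# >>> s = x.svdvals()
# s holds the singular values in descending order: ivy.array([4., 3.]).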
def tensordot(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
axes: Union[int, Tuple[List[int], List[int]]] = 2,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.tensordot(self._data, x2, axes=axes, out=out)
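# Illustrative usage for `tensordot` (an assumed sketch, not from the source):
# with axes=1 this reduces to ordinary matrix multiplication, and with the
# default axes=2 it is the full double contraction (a scalar for two matrices).
# >>> x = ivy.array([[1., 2.], [3., 4.]])
# >>> y = ivy.eye(2)
# >>> z = x.tensordot(y, axes=1)  # same result as x.matmul(y)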
def tensorsolve(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
axes: Optional[Union[int, Tuple[List[int], List[int]]]] = None,
) -> Tuple[ivy.Array]:
return ivy.tensorsolve(self._data, x2, axes=axes)
def trace(
self: ivy.Array,
/,
*,
offset: int = 0,
axis1: int = 0,
axis2: int = 1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.trace. This method Returns
the sum along the specified diagonals of a matrix (or a stack of
matrices).
Parameters
----------
self
input array having shape ``(..., M, N)`` and whose innermost two
dimensions form ``MxN`` matrices. Should have a floating-point data type.
offset
Offset of the diagonal from the main diagonal. Can be both positive and
negative. Defaults to 0.
axis1
axis to be used as the first axis of the 2-D sub-arrays from which the
diagonals should be taken.
Defaults to ``0``.
axis2
axis to be used as the second axis of the 2-D sub-arrays from which the
diagonals should be taken.
Defaults to ``1``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
an array containing the traces and whose shape is determined by removing
the last two dimensions and storing the traces in the last array dimension.
For example, if ``x`` has rank ``k`` and shape ``(I, J, K, ..., L, M, N)``,
then an output array has rank ``k-2`` and shape ``(I, J, K, ..., L)`` where
::
out[i, j, k, ..., l] = trace(a[i, j, k, ..., l, :, :])
The returned array must have the same data type as ``x``.
Examples
--------
>>> x = ivy.array([[1., 2.], [3., 4.]])
>>> y = x.trace()
>>> print(y)
ivy.array(5.)
>>> x = ivy.array([[1., 2., 4.], [6., 5., 3.]])
>>> y = ivy.Array.trace(x)
>>> print(y)
ivy.array(6.)
"""
return ivy.trace(self._data, offset=offset, axis1=axis1, axis2=axis2, out=out)
def vecdot(
self: ivy.Array,
x2: Union[ivy.Array, ivy.NativeArray],
/,
*,
axis: int = -1,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.vecdot(self._data, x2, axis=axis, out=out)
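# Illustrative usage for `vecdot` (an assumed sketch, not from the source):
# >>> x = ivy.array([1., 2., 3.])
# >>> y = ivy.array([4., 5., 6.])
# >>> z = x.vecdot(y)
# z is the dot product over the last axis: 1*4 + 2*5 + 3*6 = 32.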
def vector_norm(
self: ivy.Array,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
ord: Union[int, float, Literal[inf, -inf]] = 2,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.vector_norm. This method
computes the vector norm of a vector (or batch of vectors).
Parameters
----------
self
Input array. Should have a floating-point data type.
axis
If an integer, ``axis`` specifies the axis (dimension) along which to
compute vector norms. If an n-tuple, ``axis`` specifies the axes
(dimensions) along which to compute batched vector norms. If ``None``,
the vector norm must be computed over all array values (i.e., equivalent
to computing the vector norm of a flattened array). Negative indices are
also supported. Default: ``None``.
keepdims
If ``True``, the axes (dimensions) specified by ``axis`` must be included
in the result as singleton dimensions, and, accordingly, the result must be
compatible with the input array (see :ref:`broadcasting`). Otherwise, if
``False``, the axes (dimensions) specified by ``axis`` must not be included
in the result.
Default: ``False``.
ord
order of the norm. The following mathematical norms are supported:
+------------------+----------------------------+
| ord | description |
+==================+============================+
| 1 | L1-norm (Manhattan) |
+------------------+----------------------------+
| 2 | L2-norm (Euclidean) |
+------------------+----------------------------+
| inf | infinity norm |
+------------------+----------------------------+
| (int,float >= 1) | p-norm |
+------------------+----------------------------+
The following non-mathematical "norms" are also supported:
+------------------+--------------------------------+
| ord | description |
+==================+================================+
| 0 | sum(a != 0) |
+------------------+--------------------------------+
| -inf | min(abs(a)) |
+------------------+--------------------------------+
| (int,float < 1) | sum(abs(a)**ord)**(1./ord) |
+------------------+--------------------------------+
Default: ``2``.
dtype
data type that may be used to perform the computation more precisely.
The input array ``self`` gets cast to ``dtype`` before the function's
computations.
out
optional output array, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
an array containing the vector norms. If ``axis`` is ``None``, the returned
array must be a zero-dimensional array containing a vector norm. If ``axis``
is a scalar value (``int`` or ``float``), the returned array must have a
rank which is one less than the rank of ``self``. If ``axis`` is a
``n``-tuple, the returned array must have a rank which is ``n`` less than
the rank of ``self``. The returned array must have a floating-point data
type determined by :ref:`type-promotion`.
Examples
--------
>>> x = ivy.array([1., 2., 3.])
>>> y = x.vector_norm()
>>> print(y)
ivy.array([3.7416575])
"""
return ivy.vector_norm(
self._data, axis=axis, keepdims=keepdims, ord=ord, dtype=dtype, out=out
)
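# Illustrative sketch of non-default `ord` values (added; not from the source
# docstring): for x = ivy.array([1., 2., 3.]),
# >>> x.vector_norm(ord=1)  # L1 norm: 1 + 2 + 3 = 6
# >>> x.vector_norm(ord=float("inf"))  # max magnitude: 3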
def vector_to_skew_symmetric_matrix(
self: ivy.Array, /, *, out: Optional[ivy.Array] = None
) -> ivy.Array:
return ivy.vector_to_skew_symmetric_matrix(self._data, out=out)
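# Illustrative usage (an assumed sketch, not from the source): a 3-vector
# [a, b, c] maps to the skew-symmetric matrix
# [[0, -c, b], [c, 0, -a], [-b, a, 0]], whose matrix-vector product
# implements the cross product.
# >>> v = ivy.array([1., 2., 3.])
# >>> m = v.vector_to_skew_symmetric_matrix()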
def vander(
self: ivy.Array,
/,
*,
N: Optional[int] = None,
increasing: bool = False,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""ivy.Array instance method variant of ivy.vander. This method Returns
the Vandermonde matrix of the input array.
Parameters
----------
self
1-D input array.
N
Number of columns in the output. If N is not specified,
a square array is returned (N = len(x))
increasing
Order of the powers of the columns. If True, the powers increase
from left to right, if False (the default) they are reversed.
out
optional output array, for writing the result to.
Returns
-------
ret
an array containing the Vandermonde matrix.
Examples
--------
>>> x = ivy.array([1, 2, 3, 5])
>>> ivy.vander(x)
ivy.array(
[[ 1, 1, 1, 1],
[ 8, 4, 2, 1],
[ 27, 9, 3, 1],
[125, 25, 5, 1]]
)
>>> x = ivy.array([1, 2, 3, 5])
>>> ivy.vander(x, N=3)
ivy.array(
[[ 1, 1, 1],
[ 4, 2, 1],
[ 9, 3, 1],
[25, 5, 1]]
)
>>> x = ivy.array([1, 2, 3, 5])
>>> ivy.vander(x, N=3, increasing=True)
ivy.array(
[[ 1, 1, 1],
[ 1, 2, 4],
[ 1, 3, 9],
[ 1, 5, 25]]
)
"""
return ivy.vander(self._data, N=N, increasing=increasing, out=out)
| ivy/ivy/data_classes/array/linear_algebra.py/0 | {
"file_path": "ivy/ivy/data_classes/array/linear_algebra.py",
"repo_id": "ivy",
"token_count": 19047
} | 9 |
# global
from typing import Optional, Union, List, Tuple, Dict, Sequence
from numbers import Number
import numpy as np
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithCreation(ContainerBase):
@staticmethod
def _static_arange(
start: Union[Number, ivy.Container],
/,
stop: Optional[Union[Number, ivy.Container]] = None,
step: Union[Number, ivy.Container] = 1,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"arange",
start,
stop=stop,
step=step,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
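# Illustrative usage (an assumed sketch, not from the source): container
# leaves are mapped independently, so each leaf gets its own range.
# >>> start = ivy.Container(a=1, b=3)
# >>> r = ivy.Container._static_arange(start, 5)
# r would hold a: ivy.array([1, 2, 3, 4]) and b: ivy.array([3, 4]).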
@staticmethod
def _static_asarray(
x: Union[
ivy.Array,
ivy.NativeArray,
List[Number],
Tuple[Number],
np.ndarray,
ivy.Container,
],
/,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.asarray. This method
simply wraps the function, and so the docstring for ivy.asarray also
applies to this method with minimal changes.
Parameters
----------
x
input data, in any form that can be converted to an array. This includes
lists, lists of tuples, tuples, tuples of tuples, tuples of lists and
ndarrays.
copy
boolean indicating whether or not to copy the input array.
If True, the function must always copy.
If False, the function must never copy and must
raise a ValueError in case a copy would be necessary.
If None, the function must reuse existing memory buffer if possible
and copy otherwise. Default: ``None``.
dtype
datatype, optional. Datatype is inferred from the input data.
device
device on which to place the created array. Default: ``None``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
An array interpretation of ``self``.
Examples
--------
With :class:`ivy.Container` as input:
>>> x = ivy.Container(a = [(1,2),(3,4),(5,6)], b = ((1,2,3),(4,5,6)))
>>> ivy.asarray(x)
{
a: ivy.array([[1, 2],
[3, 4],
[5, 6]]),
b: ivy.array([[1, 2, 3],
[4, 5, 6]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"asarray",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
copy=copy,
dtype=dtype,
device=device,
out=out,
)
def asarray(
self: ivy.Container,
/,
copy: Optional[Union[bool, ivy.Container]] = None,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_asarray(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
copy=copy,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_zeros(
shape: Union[int, Sequence[int], ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
out: Optional[ivy.Container] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"zeros",
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
dtype=dtype,
device=device,
)
@staticmethod
def _static_ones(
shape: Union[int, Sequence[int], ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
out: Optional[ivy.Container] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"ones",
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
dtype=dtype,
device=device,
)
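# Illustrative usage for the creation statics above (an assumed sketch, not
# from the source): a per-leaf shape container yields per-leaf arrays.
# >>> shape = ivy.Container(a=(2,), b=(3,))
# >>> z = ivy.Container._static_zeros(shape)
# z would hold a: ivy.array([0., 0.]) and b: ivy.array([0., 0., 0.]).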
@staticmethod
def _static_empty(
shape: Union[int, Sequence[int], ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"empty",
shape,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_full(
shape: Union[ivy.Shape, ivy.NativeShape, ivy.Container],
fill_value: Union[float, bool, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"full",
shape,
fill_value,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
dtype=dtype,
device=device,
)
@staticmethod
def _static_full_like(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
fill_value: Union[int, float, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.full_like. This method
simply wraps the function, and so the docstring for ivy.full_like also
applies to this method with minimal changes.
Parameters
----------
self
input container.
fill_value
Scalar fill value
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
dtype
output array data type. If ``dtype`` is `None`, the output array data type
must be inferred from ``self``. Default: ``None``.
device
device on which to place the created array. If ``device`` is ``None``, the
output array device must be inferred from ``self``. Default: ``None``.
Returns
-------
ret
an output container having the same data type as ``x`` and whose elements,
relative to ``x``, are shifted.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> fill_value = 10
>>> y = ivy.Container._static_full_like(x, fill_value)
>>> print(y)
{
a: ivy.array([10, 10, 10]),
b: ivy.array([10, 10, 10])
}
>>> x = ivy.Container(a=ivy.array([1.2, 2.2324, 3.234]),
... b=ivy.array([4.123, 5.23, 6.23]))
>>> fill_value = 15.0
>>> y = ivy.Container._static_full_like(x, fill_value)
>>> print(y)
{
a: ivy.array([15., 15., 15.]),
b: ivy.array([15., 15., 15.])
}
"""
return ContainerBase.cont_multi_map_in_function(
"full_like",
x,
fill_value=fill_value,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
def full_like(
self: ivy.Container,
/,
fill_value: Union[int, float, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
out: Optional[ivy.Container] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.full_like. This method
simply wraps the function, and so the docstring for ivy.full_like also
applies to this method with minimal changes.
Parameters
----------
self
input container.
fill_value
Scalar fill value
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
dtype
output array data type. If ``dtype`` is `None`, the output array data type
must be inferred from ``self``. Default: ``None``.
device
device on which to place the created array. If ``device`` is ``None``, the
output array device must be inferred from ``self``. Default: ``None``.
Returns
-------
ret
an output container having the same data type as ``x`` and whose elements,
relative to ``x``, are shifted.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> fill_value = 10
>>> y = x.full_like(fill_value)
>>> print(y)
{
a: ivy.array([10, 10, 10]),
b: ivy.array([10, 10, 10])
}
>>> x = ivy.Container(a=ivy.array([1.2,2.2324,3.234]),
... b=ivy.array([4.123,5.23,6.23]))
>>> fill_value = 15.0
>>> y = x.full_like(fill_value)
>>> print(y)
{
a: ivy.array([15., 15., 15.]),
b: ivy.array([15., 15., 15.])
}
"""
return self._static_full_like(
self,
fill_value,
key_chains,
to_apply,
prune_unapplied,
map_sequences,
out=out,
dtype=dtype,
device=device,
)
@staticmethod
def _static_ones_like(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.ones_like. This method
simply wraps the function, and so the docstring for ivy.ones_like also
applies to this method with minimal changes.
Parameters
----------
x
input array from which to derive the output array shape.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
dtype
output array data type. If ``dtype`` is ``None``, the output array data type
must be inferred from ``x``. Default: ``None``.
device
device on which to place the created array. If device is ``None``, the
output array device must be inferred from ``x``. Default: ``None``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
a container having the same shape as ``x`` and filled with ones.
"""
return ContainerBase.cont_multi_map_in_function(
"ones_like",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
def ones_like(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.ones_like. This method
simply wraps the function, and so the docstring for ivy.ones_like also
applies to this method with minimal changes.
Parameters
----------
self
input array from which to derive the output array shape.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
dtype
output array data type. If ``dtype`` is ``None``, the output array data type
must be inferred from ``self``. Default: ``None``.
device
device on which to place the created array. If device is ``None``, the
output array device must be inferred from ``self``. Default: ``None``.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to.
Returns
-------
ret
a container having the same shape as ``self`` and filled with ones.
"""
return self._static_ones_like(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_zeros_like(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.zeros_like. This method
simply wraps the function, and so the docstring for ivy.zeros_like also
applies to this method with minimal changes.
Parameters
----------
x
input array or container from which to derive the output container shape.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
dtype
output array data type. If ``dtype`` is ``None``, the output container
data type must be inferred from ``x``. Default: ``None``.
device
device on which to place the created array. If device is ``None``, the
output container device must be inferred from ``x``. Default: ``None``.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
a container having the same shape as ``x`` and filled with ``zeros``.
"""
return ContainerBase.cont_multi_map_in_function(
"zeros_like",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
def zeros_like(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.zeros_like. This method
simply wraps the function, and so the docstring for ivy.zeros_like also
applies to this method with minimal changes.
Parameters
----------
self
input array or container from which to derive the output container shape.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
dtype
output array data type. If ``dtype`` is ``None``, the output container
data type must be inferred from ``self``. Default: ``None``.
device
device on which to place the created array. If device is ``None``, the
output container device must be inferred from ``self``. Default: ``None``.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
a container having the same shape as ``self`` and filled with ``zeros``.
"""
return self._static_zeros_like(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
dtype=dtype,
device=device,
)
@staticmethod
def _static_tril(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
k: Union[int, ivy.Container] = 0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"tril",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
k=k,
out=out,
)
def tril(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
k: Union[int, ivy.Container] = 0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_tril(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
k=k,
out=out,
)
@staticmethod
def _static_triu(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
k: Union[int, ivy.Container] = 0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"triu",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
k=k,
out=out,
)
def triu(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
k: Union[int, ivy.Container] = 0,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_triu(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
k=k,
out=out,
)
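# Illustrative usage for `tril`/`triu` (an assumed sketch, not from the
# source):
# >>> x = ivy.Container(a=ivy.array([[1., 2.], [3., 4.]]))
# >>> x.tril()  # a: ivy.array([[1., 0.], [3., 4.]])
# >>> x.triu()  # a: ivy.array([[1., 2.], [0., 4.]])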
@staticmethod
def _static_empty_like(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"empty_like",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
def empty_like(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_empty_like(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_eye(
n_rows: Union[int, ivy.Container],
n_cols: Optional[Union[int, ivy.Container]] = None,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
k: Union[int, ivy.Container] = 0,
batch_shape: Optional[Union[int, Sequence[int], ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"eye",
n_rows,
n_cols,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
k=k,
batch_shape=batch_shape,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_linspace(
start: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
stop: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
/,
num: Union[int, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
axis: Optional[Union[int, ivy.Container]] = None,
endpoint: Union[bool, ivy.Container] = True,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"linspace",
start,
stop,
num,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
axis=axis,
endpoint=endpoint,
dtype=dtype,
device=device,
out=out,
)
def linspace(
self: ivy.Container,
stop: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
/,
num: Union[int, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
axis: Optional[Union[int, ivy.Container]] = None,
endpoint: Union[bool, ivy.Container] = True,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_linspace(
self,
stop,
num,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
axis=axis,
endpoint=endpoint,
dtype=dtype,
device=device,
out=out,
)
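# Illustrative usage for `linspace` (an assumed sketch, not from the source):
# >>> x = ivy.Container(a=ivy.array(0.), b=ivy.array(10.))
# >>> y = x.linspace(1., 5)
# y would hold a: ivy.array([0., 0.25, 0.5, 0.75, 1.]) and
# b: ivy.array([10., 7.75, 5.5, 3.25, 1.]).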
@staticmethod
def _static_meshgrid(
*arrays: Union[
ivy.Array, ivy.NativeArray, List[Number], Tuple[Number], ivy.Container
],
sparse: Union[bool, ivy.Container] = False,
indexing: Union[str, ivy.Container] = "xy",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"meshgrid",
*arrays,
sparse=sparse,
indexing=indexing,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def meshgrid(
self: ivy.Container,
*arrays: Union[
ivy.Array, ivy.NativeArray, List[Number], Tuple[Number], ivy.Container
],
sparse: Union[bool, ivy.Container] = False,
indexing: Union[str, ivy.Container] = "xy",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
return self._static_meshgrid(
self,
*arrays,
sparse=sparse,
indexing=indexing,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_from_dlpack(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"from_dlpack",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def from_dlpack(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_from_dlpack(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_copy_array(
x: Union[ivy.Array, ivy.NativeArray, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
to_ivy_array: Union[bool, ivy.Container] = True,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"copy_array",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
to_ivy_array=to_ivy_array,
out=out,
)
def copy_array(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
to_ivy_array: Union[bool, ivy.Container] = True,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
return self._static_copy_array(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
to_ivy_array=to_ivy_array,
out=out,
)
@staticmethod
def _static_native_array(
x: Union[
ivy.Array,
ivy.NativeArray,
List[Number],
Tuple[Number],
np.ndarray,
ivy.Container,
],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"native_array",
x,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
)
def native_array(
self: ivy.Container,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
) -> ivy.Container:
return self._static_native_array(
self,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
dtype=dtype,
device=device,
)
@staticmethod
def _static_logspace(
start: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
stop: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
/,
num: Union[int, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
base: Union[float, ivy.Container] = 10.0,
axis: Union[int, ivy.Container] = 0,
endpoint: Union[bool, ivy.Container] = True,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.logspace. This method
simply wraps the function, and so the docstring for ivy.logspace also
applies to this method with minimal changes.
Parameters
----------
start
Container for first value in the range in log space.
stop
Container for last value in the range in log space.
num
Number of values to generate.
base
The base of the log space. Default is 10.0
axis
Axis along which the operation is performed. Relevant only if values in
start or stop containers are array-like. Default is 0.
endpoint
If True, stop is the last sample. Otherwise, it is not included. Default is
True.
dtype
The data type of the output tensor. If None, the dtype of on_value is used
or if that is None, the dtype of off_value is used, or if that is None,
defaults to float32. Default is None.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc. Default
is None.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to. Default is None.
Returns
-------
ret
a container having the same shape as ``start`` and filled with tensor of
evenly-spaced values in log space.
Examples
--------
>>> import ivy.container.creation.static_logspace as static_logspace
>>> x = ivy.Container(a = 1, b = 0)
>>> y = ivy.Container(a = 4, b = 1)
>>> z = static_logspace(x, y, 4)
{
a: ivy.array([10., 100., 1000., 10000.]),
b: ivy.array([ 1., 2.15443469, 4.64158883, 10.])
}
"""
return ContainerBase.cont_multi_map_in_function(
"logspace",
start,
stop,
num=num,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
base=base,
axis=axis,
endpoint=endpoint,
dtype=dtype,
device=device,
out=out,
)
def logspace(
self: ivy.Container,
stop: Union[ivy.Array, ivy.NativeArray, float, ivy.Container],
/,
num: Union[int, ivy.Container],
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
base: Union[float, ivy.Container] = 10.0,
axis: Union[int, ivy.Container] = None,
endpoint: Union[bool, ivy.Container] = True,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.logspace. This method
simply wraps the function, and so the docstring for ivy.logspace also
applies to this method with minimal changes.
Parameters
----------
self
Container for first value in the range in log space.
stop
Container for last value in the range in log space.
num
Number of values to generate.
base
The base of the log space. Default is 10.0
axis
Axis along which the operation is performed. Relevant only if values in
start or stop containers are array-like. Default is 0.
endpoint
If True, stop is the last sample. Otherwise, it is not included. Default is
True.
dtype
The data type of the output tensor. If None, the dtype of on_value is used
or if that is None, the dtype of off_value is used, or if that is None,
defaults to float32. Default is None.
device
device on which to create the array 'cuda:0', 'cuda:1', 'cpu' etc. Default
is None.
out
optional output array, for writing the result to. It must have a shape that
the inputs broadcast to. Default is None.
Returns
-------
ret
a container having the same shape as ``self`` and filled with tensor of
evenly-spaced values in log space.
Examples
--------
>>> x = ivy.Container(a = 1, b = 0)
>>> y = ivy.Container(a = 4, b = 1)
>>> z = x.logspace(y, 4)
{
a: ivy.array([10., 100., 1000., 10000.]),
b: ivy.array([ 1., 2.15443469, 4.64158883, 10.])
}
>>> x = ivy.Container(a = 1, b = 0)
>>> y = ivy.Container(a = 4, b = 1)
>>> z = ivy.logspace(x, y, 4)
{
a: ivy.array([10., 100., 1000., 10000.]),
b: ivy.array([ 1., 2.15443469, 4.64158883, 10.])
}
>>> u = ivy.Container(c = 0, d = 0)
>>> v = ivy.Container(c = 1, d = 2)
>>> x = ivy.Container(a = 1, b = u)
>>> y = ivy.Container(a = 4, b = v)
>>> z = x.logspace(y, 4)
{
a: ivy.array([10., 100., 1000., 10000.]),
b: {
c: ivy.array([ 1., 2.15443469, 4.64158883, 10.])
d: ivy.array([ 1., 4.64158883, 21.5443469, 100.])
}
}
"""
return self._static_logspace(
self,
stop,
num=num,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
base=base,
axis=axis,
endpoint=endpoint,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def _static_one_hot(
indices: ivy.Container,
depth: Union[int, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
on_value: Optional[Union[Number, ivy.Container]] = None,
off_value: Optional[Union[Number, ivy.Container]] = None,
axis: Optional[Union[int, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.one_hot. This method
simply wraps the function, and so the docstring for ivy.one_hot also
applies to this method with minimal changes.
Parameters
----------
indices
Indices for where the ones should be scattered *[batch_shape, dim]*
depth
Scalar defining the depth of the one-hot dimension.
on_value
Value to fill in output when indices[j] = i. If None, defaults to 1.
off_value
Value to fill in output when indices[j] != i. If None, defaults to 0.
axis
Axis to scatter on. The default is ``-1``, a new inner-most axis is created.
dtype
The data type of the output tensor. If None, defaults to the on_value dtype
or the off_value dtype. If both are None, defaults to float32.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
container with tensors of zeros with the same shape and type as the inputs,
unless dtype provided which overrides.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([1, 2]), \
b=ivy.array([3, 1]), c=ivy.array([2, 3]))
>>> y = 5
>>> z = ivy.Container.static_one_hot(x, y)
>>> print(z)
{
a: ivy.array([[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]]),
b: ivy.array([[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.]]),
c: ivy.array([[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.]])
}
>>> x = ivy.Container(a=ivy.array([1, 2]), \
b=ivy.array([]), c=ivy.native_array([4]))
>>> y = 5
>>> z = ivy.Container.static_one_hot(x, y)
>>> print(z)
{
a: ivy.array([[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]]),
b: ivy.array([], shape=(0, 5)),
c: ivy.array([[0., 0., 0., 0., 1.]])
}
"""
return ContainerBase.cont_multi_map_in_function(
"one_hot",
indices,
depth,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
on_value=on_value,
off_value=off_value,
axis=axis,
dtype=dtype,
device=device,
out=out,
)
def one_hot(
self: ivy.Container,
depth: Union[int, ivy.Container],
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
on_value: Optional[Union[Number, ivy.Container]] = None,
off_value: Optional[Union[Number, ivy.Container]] = None,
axis: Optional[Union[int, ivy.Container]] = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = None,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.one_hot. This method
simply wraps the function, and so the docstring for ivy.one_hot also
applies to this method with minimal changes.
Parameters
----------
self
Indices for where the ones should be scattered *[batch_shape, dim]*
depth
Scalar defining the depth of the one-hot dimension.
on_value
Value to fill in output when indices[j] == i. If None, defaults to 1.
off_value
Value to fill in output when indices[j] != i. If None, defaults to 0.
axis
Axis to scatter on. The default is ``-1``, a new inner-most axis is created.
dtype
The dtype of the returned tensor. If None, defaults to the on_value dtype
or the off_value dtype. If both are None, defaults to float32.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a
shape that the inputs broadcast to.
Returns
-------
ret
container with tensors of zeros with the same shape and type as the inputs,
unless dtype provided which overrides.
Examples
--------
With :class:`ivy.Container` input:
>>> x = ivy.Container(a=ivy.array([1, 2]), \
b=ivy.array([3, 1]), c=ivy.array([2, 3]))
>>> y = 5
>>> z = x.one_hot(y)
>>> print(z)
{
a: ivy.array([[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]]),
b: ivy.array([[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.]]),
c: ivy.array([[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.]])
}
>>> x = ivy.Container(a=ivy.array([1, 2]), \
b=ivy.array([]), c=ivy.native_array([4]))
>>> y = 5
>>> z = x.one_hot(y)
>>> print(z)
{
a: ivy.array([[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.]]),
b: ivy.array([], shape=(0, 5)),
c: ivy.array([[0., 0., 0., 0., 1.]])
}
"""
return self._static_one_hot(
self,
depth,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
on_value=on_value,
off_value=off_value,
axis=axis,
dtype=dtype,
device=device,
out=out,
)
@staticmethod
def static_frombuffer(
buffer: ivy.Container,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype, ivy.Container]] = float,
count: Optional[Union[int, ivy.Container]] = -1,
offset: Optional[Union[int, ivy.Container]] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container static method variant of ivy.frombuffer. This method
simply wraps the function, and so the docstring for ivy.frombuffer also
applies to this method with minimal changes.
Parameters
----------
buffer
An object that exposes the buffer interface.
dtype
Data-type of the returned array; default: float.
count
Number of items to read. -1 means all data in the buffer.
offset
Start reading the buffer from this offset (in bytes); default: 0.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
out
1-dimensional array.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(
... a = b'\x00\x00\x00\x00\x00\x00\xf0?',
... b = b'\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@'
... )
>>> y = ivy.Container.static_frombuffer(x)
>>> print(y)
{
a: ivy.array([1.]),
b: ivy.array([1., 2.])
}
>>> x = ivy.Container(
... a = b'\x01\x02\x03\x04',
... b = b'\x05\x04\x03\x03\x02'
... )
>>> y = ivy.Container.static_frombuffer(x, dtype=ivy.int8, count=3, offset=1)
>>> print(y)
{
a: ivy.array([2, 3, 4]),
b: ivy.array([4, 3, 3])
}
"""
return ContainerBase.cont_multi_map_in_function(
"frombuffer",
buffer,
dtype=dtype,
count=count,
offset=offset,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def frombuffer(
self: ivy.Container,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = float,
count: Optional[Union[int, ivy.Container]] = -1,
offset: Optional[Union[int, ivy.Container]] = 0,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container instance method variant of ivy.frombuffer. This method
simply wraps the function, and so the docstring for ivy.frombuffer also
applies to this method with minimal changes.
Parameters
----------
self
An object that exposes the buffer interface.
dtype
Data-type of the returned array; default: float.
count
Number of items to read. -1 means all data in the buffer.
offset
Start reading the buffer from this offset (in bytes); default: 0.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains will
be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied. Default
is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
out
1-dimensional array.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(
... a = b'\x00\x00\x00\x00\x00\x00\xf0?',
... b = b'\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@'
... )
>>> y = x.frombuffer(dtype=ivy.float64)
>>> print(y)
{
a: ivy.array([1.]),
b: ivy.array([1., 2.])
}
>>> x = ivy.Container(
... a = b'\x01\x02\x03\x04',
... b = b'\x05\x04\x03\x03\x02'
... )
>>> y = x.frombuffer(dtype=ivy.int8, count=3, offset=1)
>>> print(y)
{
a: ivy.array([2, 3, 4]),
b: ivy.array([4, 3, 3])
}
"""
return self.static_frombuffer(
self,
dtype=dtype,
count=count,
offset=offset,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def static_triu_indices(
n_rows: Union[int, ivy.Container],
n_cols: Optional[Union[int, ivy.Container]] = None,
k: Union[int, ivy.Container] = 0,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[Union[Tuple[ivy.Array], ivy.Container]] = None,
) -> ivy.Container:
return ContainerBase.cont_multi_map_in_function(
"triu_indices",
n_rows,
n_cols,
k,
key_chains,
to_apply,
prune_unapplied,
map_sequences,
device=device,
out=out,
)
def triu_indices(
self: ivy.Container,
n_rows: Union[int, ivy.Container],
n_cols: Optional[Union[int, ivy.Container]] = None,
k: Union[int, ivy.Container] = 0,
/,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
*,
device: Optional[Union[ivy.Device, ivy.NativeDevice, ivy.Container]] = None,
out: Optional[Union[Tuple[ivy.Array], ivy.Container]] = None,
) -> ivy.Container:
return self.static_triu_indices(
n_rows,
n_cols,
k,
key_chains,
to_apply,
prune_unapplied,
map_sequences,
device=device,
out=out,
)
| ivy/ivy/data_classes/container/creation.py/0 | {
"file_path": "ivy/ivy/data_classes/container/creation.py",
"repo_id": "ivy",
"token_count": 30151
} | 10 |
# global
from typing import Optional, Union, List, Dict
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
class _ContainerWithLossesExperimental(ContainerBase):
@staticmethod
def _static_l1_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.l1_loss. This method
simply wraps the function, and so the docstring for ivy.l1_loss also
applies to this method with minimal changes.
Parameters
----------
input
input array or container.
target
input array or container containing the targeted values.
reduction
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed.
``'none'``: No reduction will be applied to the output. Default: ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The L1 loss between the input array and the targeted values.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = ivy.Container.static_l1_loss(x, y)
>>> print(z)
{
a: ivy.array(1.),
b: ivy.array(0.)
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = ivy.Container.static_l1_loss(x, y)
>>> print(z)
{
a: ivy.array(1.),
b: ivy.array(4.)
}
"""
return ContainerBase.cont_multi_map_in_function(
"l1_loss",
input,
target,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def l1_loss(
self: ivy.Container,
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.l1_loss. This method
simply wraps the function, and so the docstring for ivy.l1_loss also
applies to this method with minimal changes.
Parameters
----------
self
input container.
target
input array or container containing the targeticted values.
reduction
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed.
``'none'``: No reduction will be applied to the output. Default: ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The L1 loss between the input array and the targeticted values.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = x.l1_loss(y)
>>> print(z)
{
a: ivy.array(0.),
b: ivy.array(0.)
}
"""
return self._static_l1_loss(
self,
target,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_log_poisson_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
compute_full_loss: bool = False,
axis: int = -1,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.log_poisson_loss. This
method simply wraps the function, and so the docstring for
ivy.log_poisson_loss also applies to this method with minimal changes.
Parameters
----------
input
input array or container.
target
input array or container containing the targeted values.
compute_full_loss
whether to compute the full loss. If false, a constant term is dropped
in favor of more efficient optimization. Default: ``False``.
axis
the axis along which to compute the log-likelihood loss. If axis is ``-1``,
the log-likelihood loss will be computed along the last dimension.
Default: ``-1``.
reduction
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed.
``'none'``: No reduction will be applied to the output. Default: ``'none'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The L1 loss between the input array and the targeted values.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = ivy.Container.static_log_poisson_loss(x, y, reduction='mean')
>>> print(z)
{
a: ivy.array(1.),
b: ivy.array(0.)
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([1, 2, 3])
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = ivy.Container.static_log_poisson_loss(x, y, reduction='mean')
>>> print(z)
{
a: ivy.array(1.),
b: ivy.array(4.)
}
"""
return ContainerBase.cont_multi_map_in_function(
"log_poisson_loss",
input,
target,
compute_full_loss=compute_full_loss,
axis=axis,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def log_poisson_loss(
self: ivy.Container,
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
compute_full_loss: bool = False,
axis: int = -1,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.log_poisson_loss. This
method simply wraps the function, and so the docstring for
ivy.log_poisson_loss also applies to this method with minimal changes.
Parameters
----------
self
input container.
target
input array or container containing the targeticted values.
compute_full_loss
whether to compute the full loss. If false, a constant term is dropped
in favor of more efficient optimization. Default: ``False``.
axis
the axis along which to compute the log-likelihood loss. If axis is ``-1``,
the log-likelihood loss will be computed along the last dimension.
Default: ``-1``.
reduction
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed.
``'none'``: No reduction will be applied to the output. Default: ``'none'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The L1 loss between the input array and the targeticted values.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = x.log_poisson_loss(y)
>>> print(z)
{
a: ivy.array(3.3890561),
b: ivy.array(123.413159)
}
"""
return self._static_log_poisson_loss(
self,
target,
compute_full_loss=compute_full_loss,
axis=axis,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_smooth_l1_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
beta: Optional[Union[float, ivy.Container]] = 1.0,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.smooth_l1_loss. This
method simply wraps the function, and so the docstring for ivy.
smooth_l1_loss also applies to this method with minimal changes.
Parameters
----------
input
input array or container containing input labels.
target
input array or container containing the targeticted labels.
beta
a positive float value that sets the smoothness threshold.
Default: ``1.0``.
reduction
``'none'``: No reduction will be applied to the output.
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed. Default: ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The smooth L1 loss between the input array and the targeticted labels.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([1, 0, 2]), b=ivy.array([3, 2, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),
b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_smooth_l1_loss(x, y)
>>> print(z)
{
a: ivy.array(0.9),
b: ivy.array(0.25)
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([1 , 0, 2])
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),
b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_smooth_l1_loss(x, y)
>>> print(z)
{
a: ivy.array(0.9),
b: ivy.array(0.25)
}
"""
return ContainerBase.cont_multi_map_in_function(
"smooth_l1_loss",
input,
target,
beta=beta,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def smooth_l1_loss(
self: ivy.Container,
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
beta: Optional[Union[float, ivy.Container]] = 1.0,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.smooth_l1_loss. This
method simply wraps the function, and so the docstring for ivy.
smooth_l1_loss also applies to this method with minimal changes.
Parameters
----------
self
input container containing input labels.
target
input array or container containing the targeticted labels.
beta
a positive float value that sets the smoothness threshold.
Default: ``1.0``.
reduction
``'none'``: No reduction will be applied to the output.
``'mean'``: The output will be averaged.
``'sum'``: The output will be summed. Default: ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is
``None``.
to_apply
If input, the method will be applied to key_chains, otherwise
key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to.
It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The smooth L1 loss between the input array and the targeticted labels.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 0, 2]), b=ivy.array([3, 2, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),
... b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.smooth_l1_loss(y)
>>> print(z)
{
a: ivy.array(0.43333333),
b: ivy.array(1.10666666)
}
"""
return self._static_smooth_l1_loss(
self,
target,
beta=beta,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_huber_loss(
true: Union[ivy.Container, ivy.Array, ivy.NativeArray],
pred: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
delta: Optional[Union[float, ivy.Container]] = 1.0,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of huber_loss. This method
simply wraps the function, and so the docstring for huber_loss also
applies to this method with minimal changes.
Parameters
----------
true
true array or container containing true labels.
pred
true array or container containing the predicted labels.
delta
The threshold parameter that determines the point where the loss transitions
from squared error to absolute error. Default is 1.0.
reduction : str, optional
The type of reduction to apply to the loss.
Possible values are "mean" (default)
and "sum".
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If true, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``true``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the true broadcast to.
Returns
-------
ret
The Huber loss between the true and predicted values.
Examples
--------
With :class:`ivy.Container` trues:
>>> x = ivy.Container(a=ivy.array([1, 0, 3]), b=ivy.array([0, 0, 2]))
>>> y = ivy.Container(a=ivy.array([1.5, 0.2, 2.8]), b=ivy.array([0.5, 0.2, 1.9])
)
>>> z = ivy.Container.static_huber_loss(x, y, delta=1.0)
>>> print(z)
{
a: ivy.array(0.0575),
b: ivy.array(0.005)
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` trues:
>>> x = ivy.array([1, 0, 3])
>>> y = ivy.Container(a=ivy.array([1.5, 0.2, 2.8]), b=ivy.array([0.5, 0.2, 1.9])
)
>>> z = ivy.Container.static_huber_loss(x, y, delta=1.0)
>>> print(z)
{
a: ivy.array(0.0575),
b: ivy.array(0.005)
}
"""
return ContainerBase.cont_multi_map_in_function(
"huber_loss",
true,
pred,
delta=delta,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def huber_loss(
self: ivy.Container,
pred: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
delta: Optional[Union[float, ivy.Container]] = 1.0,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of huber_loss. This method
simply wraps the function, and so the docstring for huber_loss also
applies to this method with minimal changes.
Parameters
----------
self
true container containing true labels.
pred
true array or container containing the predicted labels.
delta
The threshold parameter that determines the point where the loss transitions
from squared error to absolute error. Default is 1.0.
reduction : str, optional
The type of reduction to apply to the loss.
Possible values are "mean" (default)
and "sum".
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If true, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``true``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
out
optional output container, for writing the result to. It must have a shape
that the trues broadcast to.
Returns
-------
ret
The Huber loss between the true and predicted values.
Examples
--------
>>> x = ivy.Container(a=ivy.array([1, 0, 3]), b=ivy.array([0, 0, 2]))
>>> y = ivy.Container(a=ivy.array([1.5, 0.2, 2.8]), b=ivy.array([0.5, 0.2, 1.9])
)
>>> z = x.huber_loss(y, delta=1.0)
>>> print(z)
{
a: ivy.array(0.0575),
b: ivy.array(0.005)
}
"""
return self._static_huber_loss(
self,
pred,
delta=delta,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_soft_margin_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.soft_margin_loss. This
method simply wraps the function, and so the docstring for
ivy.soft_margin_loss also applies to this method with minimal changes.
# Insert the docstring here
Parameters
----------
input
input array or container containing input labels.
target
input array or container containing the targeticted labels.
reduction
the reduction method. Default: "mean".
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is input.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The soft margin loss between the given distributions.
"""
return ContainerBase.cont_multi_map_in_function(
"soft_margin_loss",
input,
target,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def soft_margin_loss(
self: ivy.Container,
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.soft_margin_loss. This
method simply wraps the function, and so the docstring for
ivy.soft_margin_loss also applies to this method with minimal changes.
# Insert the docstring here
Parameters
----------
self
input container containing input labels.
target
input array or container containing the targeticted labels.
reduction
the reduction method. Default: "mean".
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is input.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The soft margin loss between the given distributions.
"""
return self._static_soft_margin_loss(
self,
target,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_kl_div(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
log_target=False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container static method variant of ivy.kl_div. This method
simply wraps the function, and so the docstring for ivy.kl_div also
applies to this method with minimal changes.
Parameters
----------
input
input array or container containing input distribution.
target
input array or container containing target distribution.
reduction
the reduction method. Default: "mean".
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is input.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The Kullback-Leibler divergence loss between the given distributions.
"""
return ContainerBase.cont_multi_map_in_function(
"kl_div",
input,
target,
reduction=reduction,
log_target=log_target,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
def kl_div(
self: ivy.Container,
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
/,
*,
reduction: Optional[Union[str, ivy.Container]] = "mean",
log_target=False,
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.kl_div. This method
simply wraps the function, and so the docstring for ivy.kl_div also
applies to this method with minimal changes.
Parameters
----------
self
input container containing input distribution.
target
input array or container containing target distribution.
reduction
the reduction method. Default: "mean".
key_chains
The key-chains to apply or not apply the method to. Default is None.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is input.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is False.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is False.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The Kullback-Leibler divergence loss between the given distributions.
"""
return self._static_kl_div(
self,
target,
reduction=reduction,
log_target=log_target,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
out=out,
)
@staticmethod
def _static_poisson_nll_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*,
log_input: [Union[bool, ivy.Container]] = True,
full: [Union[bool, ivy.Container]] = False,
eps: [Union[float, ivy.Container]] = 1e-8,
reduction: [Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container static method variant of ivy.poisson_nll_loss. This
method simplywraps the function, and so the docstring for
ivy.poisson_nll_loss also applies to this method with minimal changes.
Parameters
----------
input
input array or container containing input labels.
target
input array or container containing the target labels.
log_input
If `True`, the loss is computed as
:math:`exp(input) - target * input`. If `False`, the loss is computed as
:math:`input - target * log(input + eps)`. Default is `True`.
full
Whether to compute the full loss, i.e.,
to add the Stirling approximation term
:math:`target * log(target) - target + 0.5 * log(2 * pi * target)`.
Default is `False`.
eps
Small value to prevent evaluation of `log(0)` when `log_input` is `False`.
Default is 1e-8.
reduction
Specifies the reduction applied to the output.
Options are 'none', 'mean', or 'sum'.
'none': no reduction will be applied. 'mean': the output will be averaged.
'sum': the output will be summed.
Default is 'mean'.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
An array of the same shape as `input` representing
the Poisson Negative Log Likelihood Loss.
Raises
------
ValueError
If the `input` and `target` tensors do not have the same shape.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([[0.6, 0.2, 0.3]], dtype=ivy.float32),
... b=ivy.array([[0.8, 0.2, 0.2]], dtype=ivy.float32))
>>> y = ivy.Container(a=ivy.array([[1, 0, 2]], dtype=ivy.float32),
... b=ivy.array([[3, 2, 1]], dtype=ivy.float32))
>>> z = ivy.Container._static_poisson_nll_loss(x,y)
>>> print(z)
{
a: ivy.array(1.06446016),
b: ivy.array(0.55611551)
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([[1, 0, 2]], dtype=ivy.float32)
>>> y = ivy.Container(a=ivy.array([[0.6, 0.2, 0.3]], dtype=ivy.float32),
... b=ivy.array([[0.8, 0.2, 0.2]], dtype=ivy.float32))
>>> z = ivy.Container._static_poisson_nll_loss(x, y)
>>> print(z)
{
a: ivy.array(3.30244565),
b: ivy.array(3.30244565)
}
"""
return ContainerBase.cont_multi_map_in_function(
"poisson_nll_loss",
input,
target,
log_input=log_input,
full=full,
eps=eps,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def poisson_nll_loss(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*,
log_input: [Union[bool, ivy.Container]] = True,
full: [Union[bool, ivy.Container]] = False,
eps: [Union[float, ivy.Container]] = 1e-8,
reduction: [Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container instance method variant of ivy.poisson_nll_loss. This
method simply wraps the function, and so the docstring for ivy.
poisson_nll_loss also applies to this method with minimal changes.
Parameters
----------
self
input array or container containing input labels.
target
input array or container containing the target labels.
log_input
If `True`, the loss is computed as
:math:`exp(input) - target * input`. If `False`, the loss is computed as
:math:`input - target * log(input + eps)`. Default is `True`.
full
Whether to compute the full loss, i.e.,
to add the Stirling approximation term
:math:`target * log(target) - target + 0.5 * log(2 * pi * target)`.
Default is `False`.
eps
Small value to prevent evaluation of `log(0)` when `log_input` is `False`.
Default is 1e-8.
reduction
Specifies the reduction applied to the output.
Options are 'none', 'mean', or 'sum'.
'none': no reduction will be applied. 'mean': the output will be averaged.
'sum': the output will be summed.
Default is 'mean'.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Returns
-------
ret
An array of the same shape as `input` representing
the Poisson Negative Log Likelihood Loss.
Raises
------
ValueError
If the `input` and `target` tensors do not have the same shape.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[1, 0, 2]], dtype=ivy.float32),
... b=ivy.array([[3, 2, 1]], dtype=ivy.float32))
>>> y = ivy.Container(a=ivy.array([[0.6, 0.2, 0.3]], dtype=ivy.float32),
... b=ivy.array([[0.8, 0.2, 0.2]], dtype=ivy.float32))
>>> z = x.poisson_nll_loss(y)
>>> print(z)
{
a: ivy.array(3.30244565),
b: ivy.array(9.06429195)
}
"""
return self._static_poisson_nll_loss(
self,
target,
log_input=log_input,
full=full,
eps=eps,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
@staticmethod
def _static_hinge_embedding_loss(
input: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*,
margin: [Union[float, ivy.Container]] = 1.0,
reduction: [Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container static method variant of ivy.hinge_embedding_loss.
This method simplywraps the function, and so the docstring for
ivy.hinge_embedding_loss also applies to this method with minimal
changes.
Parameters
----------
input
input array or container containing input labels.
target
input array or container containing the target labels.
margin
Sets the hyperparameter margin. Determines the necessary input size
for hinge_embedding_loss calculations when label is -1. Inputs smaller
than the margin are minimized with hinge_embedding_loss.
Default is 1.0.
reduction
Specifies how to aggregate the loss across the batch. Options are:
- ``'none'``: Returns the unreduced loss.
- ``'mean'``: Returns the mean loss.
- ``'sum'``: Returns the summed loss.
Default is ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If input, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``input``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Shape
-----
- Input: :math:`(*)` where :math:`*` means, any number of dimensions. \
The sum operation operates over all the elements.
- Target: :math:`(*)`, same shape as the input
- Output: scalar. If :attr:`reduction` is ``'none'``,
then same shape as the input
Returns
-------
ret
Hinge embedding loss calculated from the input and label,
shaped based on the reduction method.
Examples
--------
With :class:`ivy.Container` inputs:
>>> x = ivy.Container(a=ivy.array([[1, 0, 2]], dtype=ivy.float32),
... b=ivy.array([[-1, 1, 1]], dtype=ivy.float32))
>>> y = ivy.Container(a=ivy.array([[0.6, 0.2, 0.3]], dtype=ivy.float32),
... b=ivy.array([[1, 1, 1]], dtype=ivy.float32))
>>> z = ivy.Container._static_hinge_embedding_loss(x, y, reduction="none")
>>> z
{
a: ivy.array([[0., 0., 0.]]),
b: ivy.array([[-1., 1., 1.]])
}
With a mix of :class:`ivy.Array` and :class:`ivy.Container` inputs:
>>> x = ivy.array([[10, 20, 32]], dtype=ivy.float32)
>>> y = ivy.Container(a=ivy.array([[-1, -1, -1]], dtype=ivy.float32),
... b=ivy.array([[1, 1, 1]], dtype=ivy.float32))
>>> z = ivy.Container._static_hinge_embedding_loss(x, y,
... reduction="sum", margin=2.0)
>>> z
{
a: ivy.array(0.),
b: ivy.array(62.)
}
"""
return ContainerBase.cont_multi_map_in_function(
"hinge_embedding_loss",
input,
target,
margin=margin,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
def hinge_embedding_loss(
self: Union[ivy.Container, ivy.Array, ivy.NativeArray],
target: Union[ivy.Container, ivy.Array, ivy.NativeArray],
*,
margin: [Union[float, ivy.Container]] = 1.0,
reduction: [Union[str, ivy.Container]] = "mean",
key_chains: Optional[Union[List[str], Dict[str, str], ivy.Container]] = None,
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
) -> ivy.Container:
r"""ivy.Container instance method variant of ivy.hinge_embedding_loss.
This method simply wraps the function, and so the docstring for
ivy.hinge_embedding_loss also applies to this method with minimal
changes.
Parameters
----------
input
input array or container containing input labels.
target
input array or container containing the target labels.
margin
Sets the hyperparameter margin. Determines the necessary input size
for hinge_embedding_loss calculations when label is -1. Inputs smaller
than the margin are minimized with hinge_embedding_loss.
Default is 1.0.
reduction
Specifies how to aggregate the loss across the batch. Options are:
- ``'none'``: Returns the unreduced loss.
- ``'mean'``: Returns the mean loss.
- ``'sum'``: Returns the summed loss.
Default is ``'mean'``.
key_chains
The key-chains to apply or not apply the method to. Default is ``None``.
to_apply
If True, the method will be applied to key_chains, otherwise key_chains
will be skipped. Default is ``True``.
prune_unapplied
Whether to prune key_chains for which the function was not applied.
Default is ``False``.
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
Shape
-----
- Input: :math:`(*)` where :math:`*` means any number of dimensions. \
The sum operation operates over all the elements.
- Target: :math:`(*)`, same shape as the input
- Output: scalar. If :attr:`reduction` is ``'none'``,
then same shape as the input
Returns
-------
ret
Hinge embedding loss calculated from the input and label,
shaped based on the reduction method.
Examples
--------
>>> x = ivy.Container(a=ivy.array([[1, 0, 2]], dtype=ivy.float32),
... b=ivy.array([[3, 2, 1]], dtype=ivy.float32))
>>> y = ivy.Container(a=ivy.array([[-1, -1, -1]], dtype=ivy.float32),
... b=ivy.array([[1, 1, 1]], dtype=ivy.float32))
>>> x.hinge_embedding_loss(y, reduction="none", margin=0.5)
{
a: ivy.array([[0., 0.5, 0.]]),
b: ivy.array([[3., 2., 1.]])
}
"""
return self._static_hinge_embedding_loss(
self,
target,
margin=margin,
reduction=reduction,
key_chains=key_chains,
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
)
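# For reference, the element-wise rule computed here (a sketch, following the
# standard hinge-embedding definition that this function mirrors) is:
#
#   l_n = x_n                      if y_n == 1
#   l_n = max(0, margin - x_n)     if y_n == -1
#
# after which the chosen `reduction` is applied over all elements.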
| ivy/ivy/data_classes/container/experimental/losses.py/0 | {
"file_path": "ivy/ivy/data_classes/container/experimental/losses.py",
"repo_id": "ivy",
"token_count": 22860
} | 11 |
# global
from typing import Optional, List, Union
# local
import ivy
from ivy.data_classes.container.base import ContainerBase
# ToDo: implement all methods here as public instance methods
# noinspection PyMissingConstructor
class _ContainerWithNorms(ContainerBase):
def layer_norm(
self: Union[ivy.Array, ivy.NativeArray, ivy.Container],
normalized_idxs: List[Union[int, ivy.Container]],
/,
*,
scale: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
offset: Optional[Union[ivy.Array, ivy.NativeArray, ivy.Container]] = None,
eps: Union[float, ivy.Container] = 1e-05,
new_std: Union[float, ivy.Container] = 1.0,
out: Optional[Union[ivy.Array, ivy.Container]] = None,
) -> ivy.Container:
"""ivy.Container instance method variant of ivy.layer_norm. This method
simply wraps the function, and so the docstring for ivy.layer_norm also
applies to this method with minimal changes.
Parameters
----------
self
Input container
normalized_idxs
Indices to apply the normalization to.
scale
Learnable gamma variables for elementwise post-multiplication,
default is ``None``.
offset
Learnable beta variables for elementwise post-addition, default is ``None``.
eps
small constant to add to the denominator. Default is ``1e-05``.
new_std
The standard deviation of the new normalized values. Default is 1.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
Returns
-------
ret
The layer after applying layer normalization.
Examples
--------
With one :class:`ivy.Container` input:
>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
... 'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = [0]
>>> norm = x.layer_norm(normalized_idxs, eps=1.25, scale=0.3)
>>> print(norm)
{
a: ivy.array([-0.34198591, 0.04274819, 0.29923761]),
b: ivy.array([[-0.24053511, -0.24053511, -0.24053511],
[0.24053511, 0.24053511, 0.24053511]])
}
With multiple :class:`ivy.Container` inputs:
>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
... 'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = ivy.Container({'a': [0], 'b': [1]})
>>> new_std = ivy.Container({'a': 1.25, 'b': 1.5})
>>> norm = x.layer_norm(normalized_idxs, new_std=new_std, offset=1)
>>> print(norm)
{
a: ivy.array([-1.62221265, 0.20277636, 1.41943574]),
b: ivy.array([[-1.83710337, 0., 1.83710337],
[-1.83710337, 0., 1.83710337]])
}
"""
return ivy.layer_norm(
self,
normalized_idxs,
scale=scale,
offset=offset,
eps=eps,
new_std=new_std,
out=out,
)
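# A minimal sketch of the per-element computation (assuming the usual
# layer-norm definition; mean/var are taken over `normalized_idxs`, and the
# exact placement of `new_std` relative to `scale` is an assumption here):
#
#   y = (x - mean) / sqrt(var + eps) * new_std * scale + offset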
| ivy/ivy/data_classes/container/norms.py/0 | {
"file_path": "ivy/ivy/data_classes/container/norms.py",
"repo_id": "ivy",
"token_count": 1604
} | 12 |
# global
import abc
from typing import List, Tuple
# local
import ivy
class NestedArrayBase(abc.ABC):
"""Base class for nested array objects."""
def __init__(self, data, nested_rank, inner_shape, dtype, device, internal=False):
if not internal:
raise RuntimeError(
"NestedArray is an abstract class "
"and should not be instantiated directly. "
"Please use one of the factory methods instead."
)
self._data = data
self._nested_rank = nested_rank
self._inner_shape = inner_shape
self._shape = [len(self._data)] + [None] * self._nested_rank + self._inner_shape
self._dtype = dtype
self._device = device
self._pre_repr = "ivy.NestedArray"
@classmethod
def nested_array(
cls, data, nested_rank=None, inner_shape=None, dtype=None, device=None
):
dtype = ivy.default_dtype(dtype=dtype, item=data)
device = ivy.default_device(device, item=data)
# convert all the leaf lists to ivy arrays, determine inner_shape and depth
det_inner_shape = []
# ToDo: add check for depth being the same for all nests
def _seq_to_ivy(x, depth=0):
if nested_rank is not None and depth >= nested_rank:
x = ivy.array(x, dtype=dtype, device=device)
depth += x.ndim - 1
if x.ndim > 1:
det_inner_shape.append(list(x.shape[1:]))
else:
det_inner_shape.append([])
elif (
isinstance(x, (list, tuple))
and len(x) != 0
and isinstance(x[0], (list, tuple))
):
depth_ret = None
for i, item in enumerate(x):
x = list(x) if isinstance(x, tuple) else x
x[i], depth_ret = _seq_to_ivy(item, depth=depth + 1)
depth = depth_ret if depth_ret else depth
else:
x = ivy.array(x, dtype=dtype, device=device)
if x.ndim > 1:
det_inner_shape.append(list(x.shape[1:]))
else:
det_inner_shape.append([])
return x, depth
if isinstance(data, (list, tuple)):
data, depth = _seq_to_ivy(data)
depth += 1
# make sure that all the elements of det_inner_shape are the same
if len(det_inner_shape) > 0:
if [det_inner_shape[0]] * len(det_inner_shape) != det_inner_shape:
raise ValueError(
"All the elements of the nested array must have the same "
f"inner shape, got: {det_inner_shape}"
)
det_inner_shape = det_inner_shape[0]
# defining default values for nested_rank and inner_shape
default_nested_rank = (
max(0, depth - 1)
if inner_shape is None
else max(0, depth - 1 - len(inner_shape))
)
default_inner_shape = [] if nested_rank is None else det_inner_shape
# determining actual values for nested_rank and inner_shape
nested_rank = (
nested_rank if nested_rank is not None else default_nested_rank
)
inner_shape = (
list(inner_shape) if inner_shape is not None else default_inner_shape
)
elif isinstance(data, cls):
data = data._data
nested_rank = nested_rank if nested_rank is not None else data.nested_rank
inner_shape = (
list(inner_shape) if inner_shape is not None else data.inner_shape
)
else:
raise TypeError(f"Input data must be pylist or tuple, got: {type(data)}")
return cls(data, nested_rank, inner_shape, dtype, device, internal=True)
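# Illustrative usage (a sketch; the shapes shown are assumptions):
#
#   na = ivy.NestedArray.nested_array([[1, 2], [3, 4, 5]])
#   na.shape        # -> [2, None]  (the ragged dimension is reported as None)
#   na.nested_rank  # -> 1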
@staticmethod
def ragged_multi_map_in_function(fn, *args, **kwargs):
arg_nest_idxs = ivy.nested_argwhere(
args, ivy.is_ivy_nested_array, to_ignore=ivy.NestedArray
)
kwarg_nest_idxs = ivy.nested_argwhere(
kwargs, ivy.is_ivy_nested_array, to_ignore=ivy.NestedArray
)
# retrieve all the nested_array in args and kwargs
arg_nest = ivy.multi_index_nest(args, arg_nest_idxs)
kwarg_nest = ivy.multi_index_nest(kwargs, kwarg_nest_idxs)
num_arg_nest, num_kwarg_nest = len(arg_nest), len(kwarg_nest)
num_nest = num_arg_nest + num_kwarg_nest
inspect_fn = fn
if isinstance(fn, str):
inspect_fn = ivy.__dict__[fn]
nests = arg_nest + kwarg_nest
def map_fn(vals):
arg_vals = vals[:num_arg_nest]
a = ivy.copy_nest(args, to_mutable=True)
ivy.set_nest_at_indices(a, arg_nest_idxs, arg_vals)
kwarg_vals = vals[num_arg_nest:]
kw = ivy.copy_nest(kwargs, to_mutable=True)
ivy.set_nest_at_indices(kw, kwarg_nest_idxs, kwarg_vals)
return inspect_fn(*a, **kw)
if num_nest == 0:
raise ValueError(
f"No RaggedArrays found in args or kwargs of function {fn}"
)
ret = ivy.NestedArray.ragged_multi_map(map_fn, nests)
return ret
@staticmethod
def ragged_multi_map(fn, ragged_arrays):
args = []
for ragged in ragged_arrays:
args.append(ivy.copy_nest(ragged.data))
ret = ivy.nested_multi_map(lambda x, _: fn(x), args)
# infer dtype, shape, and device from the first array in the ret data
broadcasted_shape = ivy.NestedArray.broadcast_shapes(
[arg.shape for arg in ragged_arrays]
)
# infer ragged_rank from broadcasted shape
for i, dim in enumerate(broadcasted_shape[::-1]):
if dim is None:
nested_rank = len(broadcasted_shape) - i - 1
break
inner_shape = broadcasted_shape[nested_rank:]
arr0_id = ivy.nested_argwhere(ret, ivy.is_ivy_array, stop_after_n_found=1)[0]
arr0 = ivy.index_nest(ret, arr0_id)
ragged_ret = ivy.NestedArray.nested_array(
ret, nested_rank, inner_shape, arr0.dtype, arr0.device
)
return ragged_ret
@staticmethod
def replace_ivy_arrays(ragged_array, arrays):
data = ragged_array.data
ivy_idxs = ivy.nested_argwhere(data, ivy.is_ivy_array)
arr0 = arrays[0]
inner_shape, dev, dtype = arr0.shape.as_list(), arr0.device, arr0.dtype
ret = ivy.set_nest_at_indices(data, ivy_idxs, arrays, shallow=False)
return ivy.NestedArray.nested_array(
ret, ragged_array.nested_rank, inner_shape, dtype, dev
)
@staticmethod
def broadcast_shapes(shapes):
z = []
max_length = max(len(x) for x in shapes)
shape_list = list(shapes)
# making every shape the same length
for i, shape in enumerate(shapes):
if len(shape) != max_length:
shape_list[i] = [1] * (max_length - len(shape)) + shape
# broadcasting
for x in zip(*shape_list):
if None in x:
for dims in x:
if dims is not None and dims != 1:
raise ValueError(
f"Shapes {shapes[0]} and {shapes[1]} are not broadcastable"
)
z.append(None)
elif 1 in x:
dim_exist = False
for dims in x:
if dims != 1:
z.append(dims)
if dim_exist:
raise ValueError(
f"Shapes {shapes[0]} and {shapes[1]} are not"
" broadcastable"
)
else:
dim_exist = True
if not dim_exist:
z.append(1)
elif len(set(x)) == 1:
z.append(x[0])
else:
raise ValueError(
f"Shapes {shapes[0]} and {shapes[1]} are not broadcastable"
)
return z
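# For example (a sketch): broadcasting [2, None, 3] against [1, 1, 3]
# yields [2, None, 3] -- a ragged (None) dimension only broadcasts with 1.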
def ragged_map(self, fn):
arg = ivy.copy_nest(self._data)
ivy.nested_map(lambda x: fn(x), arg, shallow=True)
# infer dtype, shape, and device from the first array in the ret data
arr0_id = ivy.nested_argwhere(arg, ivy.is_ivy_array, stop_after_n_found=1)[0]
arr0 = ivy.index_nest(arg, arr0_id)
inner_shape = arr0.shape.as_list()[1:]
ragged_ret = ivy.NestedArray.nested_array(
arg, self._nested_rank, inner_shape, arr0.dtype, arr0.device
)
return ragged_ret
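# e.g. (a sketch): nested.ragged_map(lambda a: a * 2) doubles every leaf
# array of the copied nest and re-wraps the result with a freshly inferred
# inner_shape, dtype and device.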
def unbind(self):
return tuple(ivy.copy_nest(self._data))
# Properties #
# ---------- #
@property
def data(self) -> ivy.NativeArray:
"""The native array being wrapped in self."""
return self._data
@property
def dtype(self) -> ivy.Dtype:
"""Data type of the array elements."""
return self._dtype
@property
def device(self) -> ivy.Device:
"""Hardware device the array data resides on."""
return self._device
@property
def shape(self) -> List:
"""Array dimensions."""
return self._shape
@property
def ndim(self) -> int:
"""Number of array dimensions (axes)."""
return len(self._shape)
@property
def nested_rank(self) -> int:
"""Nested Rank."""
return self._nested_rank
@property
def inner_shape(self) -> Tuple[int]:
"""Inner Shape."""
return self._inner_shape
# Built-ins #
# ----------#
def __repr__(self):
rep = self._data.__repr__().replace("[ivy.array", "[")
rep = rep.replace("ivy.array", "\n\t").replace("(", "").replace(")", "")
ret = self._pre_repr + "(\n\t" + rep + "\n)"
return ret
def __getitem__(self, query):
ret = self._data[query]
if isinstance(ret, list):
return self.__class__.nested_array(
ret, self._nested_rank - 1, dtype=self._dtype, device=self._device
)
return ret
| ivy/ivy/data_classes/nested_array/base.py/0 | {
"file_path": "ivy/ivy/data_classes/nested_array/base.py",
"repo_id": "ivy",
"token_count": 5340
} | 13 |
use super::{ArrayElement, ElementType, PrimitiveType};
use crate::{c_lib, Error, Result};
use pyo3::prelude::*;
#[derive(Clone, PartialEq, Eq, Debug)]
#[pyclass(unsendable)]
pub struct ArrayShape {
ty: ElementType,
dims: Vec<i64>,
}
impl ArrayShape {
/// Create a new array shape.
pub fn new<E: ArrayElement>(dims: Vec<i64>) -> Self {
Self { ty: E::TY, dims }
}
/// Create a new array shape.
pub fn new_with_type(ty: ElementType, dims: Vec<i64>) -> Self {
Self { ty, dims }
}
pub fn element_type(&self) -> ElementType {
self.ty
}
pub fn ty(&self) -> ElementType {
self.ty
}
/// The stored primitive type.
pub fn primitive_type(&self) -> PrimitiveType {
self.ty.primitive_type()
}
/// The number of elements stored in arrays that use this shape, this is the product of sizes
/// across each dimension.
pub fn element_count(&self) -> usize {
self.dims.iter().map(|d| *d as usize).product::<usize>()
}
pub fn dims(&self) -> &[i64] {
&self.dims
}
pub fn first_dim(&self) -> Option<i64> {
self.dims.first().copied()
}
pub fn last_dim(&self) -> Option<i64> {
self.dims.last().copied()
}
}
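// Illustrative usage (a sketch; it assumes `f32` implements `ArrayElement`
// in this crate, as the primitive element types do):
//
//     let shape = ArrayShape::new::<f32>(vec![2, 3]);
//     assert_eq!(shape.element_count(), 6);
//     assert_eq!(shape.first_dim(), Some(2));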
/// A shape specifies a primitive type as well as some array dimensions.
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Shape {
Tuple(Vec<Shape>),
Array(ArrayShape),
Unsupported(PrimitiveType),
}
impl Shape {
/// Create a new array shape.
pub fn array<E: ArrayElement>(dims: Vec<i64>) -> Self {
Self::Array(ArrayShape { ty: E::TY, dims })
}
/// Create a new array shape.
pub fn array_with_type(ty: ElementType, dims: Vec<i64>) -> Self {
Self::Array(ArrayShape { ty, dims })
}
/// Create a new tuple shape.
pub fn tuple(shapes: Vec<Self>) -> Self {
Self::Tuple(shapes)
}
/// The stored primitive type.
pub fn primitive_type(&self) -> PrimitiveType {
match self {
Self::Tuple(_) => PrimitiveType::Tuple,
Self::Array(a) => a.ty.primitive_type(),
Self::Unsupported(ty) => *ty,
}
}
pub fn is_tuple(&self) -> bool {
match self {
Self::Tuple(_) => true,
Self::Array { .. } | Self::Unsupported(_) => false,
}
}
pub fn tuple_size(&self) -> Option<usize> {
match self {
Self::Tuple(shapes) => Some(shapes.len()),
Self::Array { .. } | Self::Unsupported(_) => None,
}
}
#[allow(dead_code)]
pub(crate) fn c_shape(&self) -> Result<CShape> {
match self {
Self::Tuple(shapes) => {
let shapes = shapes.iter().map(|s| s.c_shape()).collect::<Result<Vec<_>>>()?;
let ptrs: Vec<_> = shapes.iter().map(|s| s.0).collect();
let c_shape = CShape(unsafe { c_lib::make_shape_tuple(ptrs.len(), ptrs.as_ptr()) });
drop(shapes);
Ok(c_shape)
}
Self::Array(a) => {
let dims = a.dims();
Ok(CShape(unsafe {
c_lib::make_shape_array(a.primitive_type() as i32, dims.len(), dims.as_ptr())
}))
}
Self::Unsupported(_) => Err(Error::UnsupportedShape { shape: self.clone() }),
}
}
}
impl TryFrom<&Shape> for ArrayShape {
type Error = Error;
fn try_from(value: &Shape) -> Result<Self> {
match value {
Shape::Tuple(_) | Shape::Unsupported(_) => {
Err(Error::NotAnArray { expected: None, got: value.clone() })
}
Shape::Array(a) => Ok(a.clone()),
}
}
}
macro_rules! extract_dims {
($cnt:tt, $dims:expr, $out_type:ty) => {
impl TryFrom<&ArrayShape> for $out_type {
type Error = Error;
fn try_from(value: &ArrayShape) -> Result<Self> {
if value.dims.len() != $cnt {
Err(Error::UnexpectedNumberOfDims {
expected: $cnt,
got: value.dims.len(),
dims: value.dims.clone(),
})
} else {
Ok($dims(&value.dims))
}
}
}
impl TryFrom<&Shape> for $out_type {
type Error = Error;
fn try_from(value: &Shape) -> Result<Self> {
match value {
Shape::Tuple(_) | Shape::Unsupported(_) => {
Err(Error::NotAnArray { expected: Some($cnt), got: value.clone() })
}
Shape::Array(a) => Self::try_from(a),
}
}
}
};
}
extract_dims!(1, |d: &Vec<i64>| d[0], i64);
extract_dims!(2, |d: &Vec<i64>| (d[0], d[1]), (i64, i64));
extract_dims!(3, |d: &Vec<i64>| (d[0], d[1], d[2]), (i64, i64, i64));
extract_dims!(4, |d: &Vec<i64>| (d[0], d[1], d[2], d[3]), (i64, i64, i64, i64));
extract_dims!(5, |d: &Vec<i64>| (d[0], d[1], d[2], d[3], d[4]), (i64, i64, i64, i64, i64));
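// e.g. (a sketch): `i64::try_from(&shape)` succeeds only for rank-1 array
// shapes, `<(i64, i64)>::try_from(&shape)` only for rank-2, and so on up to
// rank 5; anything else yields `UnexpectedNumberOfDims` / `NotAnArray` errors.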
pub(crate) struct CShape(c_lib::shape);
impl CShape {
pub(crate) fn from_ptr(ptr: c_lib::shape) -> Self {
Self(ptr)
}
pub(crate) fn shape(&self) -> Result<Shape> {
fn from_ptr_rec(ptr: c_lib::shape) -> Result<Shape> {
let ty = unsafe { c_lib::shape_element_type(ptr) };
let ty = super::FromPrimitive::from_i32(ty)
.ok_or_else(|| Error::UnexpectedElementType(ty))?;
match ty {
PrimitiveType::Tuple => {
let elem_cnt = unsafe { c_lib::shape_tuple_shapes_size(ptr) };
let shapes: Result<Vec<_>> = (0..elem_cnt)
.map(|i| from_ptr_rec(unsafe { c_lib::shape_tuple_shapes(ptr, i as i32) }))
.collect();
Ok(Shape::Tuple(shapes?))
}
ty => match ty.element_type() {
Ok(ty) => {
let rank = unsafe { c_lib::shape_dimensions_size(ptr) };
let dims: Vec<_> =
(0..rank).map(|i| unsafe { c_lib::shape_dimensions(ptr, i) }).collect();
Ok(Shape::Array(ArrayShape { ty, dims }))
}
Err(_) => Ok(Shape::Unsupported(ty)),
},
}
}
from_ptr_rec(self.0)
}
pub(crate) fn as_ptr(&self) -> c_lib::shape {
self.0
}
}
impl Drop for CShape {
fn drop(&mut self) {
unsafe { c_lib::shape_free(self.0) };
}
}
| ivy/ivy/engines/XLA/rust_api/src/wrappers/shape.rs/0 | {
"file_path": "ivy/ivy/engines/XLA/rust_api/src/wrappers/shape.rs",
"repo_id": "ivy",
"token_count": 3508
} | 14 |
# global
from typing import Union, Optional
import jax
import jax.numpy as jnp
# local
import ivy
from ivy import (
default_float_dtype,
is_float_dtype,
)
from ivy import promote_types_of_inputs
from ivy.functional.backends.jax import JaxArray
from ivy.func_wrapper import with_unsupported_dtypes
from . import backend_version
def abs(
x: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
if (hasattr(x, "dtype") and "bool" in str(x.dtype)) or isinstance(x, bool):
return x
# jnp.where is used for consistent gradients
return jnp.where(x != 0, jnp.absolute(x), 0)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def acos(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arccos(x)
def acosh(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arccosh(x)
def add(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
alpha: Union[int, float] = 1,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
with ivy.ArrayMode(False):
x2 = multiply(x2, alpha)
return jnp.add(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def asin(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arcsin(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def asinh(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arcsinh(x)
def atan(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arctan(x)
def atan2(x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.arctan2(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def atanh(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.arctanh(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_and(
x1: Union[int, JaxArray],
x2: Union[int, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return jnp.bitwise_and(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_invert(
x: Union[int, JaxArray], /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.bitwise_not(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_left_shift(
x1: Union[int, JaxArray],
x2: Union[int, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return jnp.left_shift(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_or(
x1: Union[int, JaxArray],
x2: Union[int, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return jnp.bitwise_or(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_right_shift(
x1: Union[int, JaxArray],
x2: Union[int, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return jnp.right_shift(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def bitwise_xor(
x1: Union[int, JaxArray],
x2: Union[int, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return jnp.bitwise_xor(x1, x2)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def ceil(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
if "int" in str(x.dtype):
return x
else:
return jnp.ceil(x)
def cos(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.cos(x)
@with_unsupported_dtypes({"0.4.24 and below": ("float16",)}, backend_version)
def cosh(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.cosh(x)
def divide(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
ret = jnp.divide(x1, x2)
if ivy.is_float_dtype(x1.dtype) or ivy.is_complex_dtype(x1.dtype):
ret = jnp.asarray(ret, dtype=x1.dtype)
else:
ret = jnp.asarray(ret, dtype=ivy.default_float_dtype(as_native=True))
return ret
def equal(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.equal(x1, x2)
def exp(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.exp(x)
def expm1(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.expm1(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def floor(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
if "int" in str(x.dtype):
return x
else:
return jnp.floor(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def floor_divide(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.floor(jnp.divide(x1, x2)).astype(x1.dtype)
def fmin(
x1: JaxArray,
x2: JaxArray,
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.fmin(x1, x2)
def greater(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.greater(x1, x2)
def greater_equal(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.greater_equal(x1, x2)
def isfinite(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.isfinite(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def isinf(
x: JaxArray,
/,
*,
detect_positive: bool = True,
detect_negative: bool = True,
out: Optional[JaxArray] = None,
) -> JaxArray:
if detect_positive and detect_negative:
return jnp.isinf(x)
elif detect_positive:
return jnp.isposinf(x)
elif detect_negative:
return jnp.isneginf(x)
return jnp.full_like(x, False, dtype=jnp.bool_)
def isnan(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.isnan(x)
def lcm(x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.lcm(x1, x2)
def less(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.less(x1, x2)
def less_equal(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.less_equal(x1, x2)
def log(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.log(x)
def log10(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.log10(x)
def log1p(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.log1p(x)
def log2(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.log2(x)
def logaddexp(
x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.logaddexp(x1, x2)
def logaddexp2(
x1: Union[JaxArray, float, list, tuple],
x2: Union[JaxArray, float, list, tuple],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
if not is_float_dtype(x1):
x1 = x1.astype(default_float_dtype(as_native=True))
x2 = x2.astype(default_float_dtype(as_native=True))
return jnp.logaddexp2(x1, x2)
def logical_and(
x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.logical_and(x1, x2)
def logical_not(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.logical_not(x)
def logical_or(
x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.logical_or(x1, x2)
def logical_xor(
x1: JaxArray, x2: JaxArray, /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.logical_xor(x1, x2)
def multiply(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.multiply(x1, x2)
def nan_to_num(
x: JaxArray,
/,
*,
copy: bool = True,
nan: Union[float, int] = 0.0,
posinf: Optional[Union[float, int]] = None,
neginf: Optional[Union[float, int]] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.nan_to_num(x, copy=copy, nan=nan, posinf=posinf, neginf=neginf)
def negative(
x: Union[float, JaxArray], /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.negative(x)
def not_equal(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return jnp.not_equal(x1, x2)
def positive(
x: Union[float, JaxArray], /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.positive(x)
def pow(
x1: JaxArray,
x2: Union[int, float, JaxArray],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if (
ivy.any(x1 == 0)
and ivy.is_int_dtype(x1)
and ivy.any(x2 < 0)
and all(dtype not in str(x1.dtype) for dtype in ["int16", "int8"])
):
if ivy.is_int_dtype(x1):
fill_value = jnp.iinfo(x1.dtype).min
else:
fill_value = jnp.finfo(x1.dtype).min
ret = jnp.float_power(x1, x2)
return jnp.where(jnp.bitwise_and(x1 == 0, x2 < 0), fill_value, ret).astype(
x1.dtype
)
if ivy.is_int_dtype(x1) and ivy.any(x2 < 0):
return jnp.float_power(x1, x2).astype(x1.dtype)
return jnp.power(x1, x2)
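# e.g. (a sketch): pow(jnp.array([2, 0]), jnp.array(-1)) routes through
# jnp.float_power and fills the undefined 0 ** -1 slot with the integer
# dtype's minimum value instead of raising.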
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def remainder(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
modulus: bool = True,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if not modulus:
res = x1 / x2
res_floored = jnp.where(res >= 0, jnp.floor(res), jnp.ceil(res))
diff = res - res_floored
diff, x2 = ivy.promote_types_of_inputs(diff, x2)
return jnp.round(diff * x2).astype(x1.dtype)
return jnp.remainder(x1, x2)
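# e.g. (a sketch): remainder(-7., 3., modulus=False) computes the truncated
# (C-style) remainder -1., whereas the default modulus=True gives 2.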
def round(
x: JaxArray, /, *, decimals: int = 0, out: Optional[JaxArray] = None
) -> JaxArray:
if "int" in str(x.dtype):
ret = jnp.copy(x)
else:
ret = jnp.round(x, decimals=decimals)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
def _abs_variant_sign(x):
return jnp.where(x != 0, x / jnp.abs(x), 0)
def sign(
x: JaxArray, /, *, np_variant: Optional[bool] = True, out: Optional[JaxArray] = None
) -> JaxArray:
if "complex" in str(x.dtype):
return jnp.sign(x) if np_variant else _abs_variant_sign(x)
return jnp.where(x == -0.0, 0.0, jnp.sign(x)).astype(x.dtype)
def sin(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.sin(x)
def sinh(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.sinh(x)
def sqrt(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.sqrt(x)
def square(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.square(x)
def subtract(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
with ivy.ArrayMode(False):
x2 = multiply(x2, alpha)
return jnp.subtract(x1, x2)
def trapz(
y: JaxArray,
/,
*,
x: Optional[JaxArray] = None,
dx: float = 1.0,
axis: int = -1,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.trapz(y, x=x, dx=dx, axis=axis)
@with_unsupported_dtypes(
{"0.4.24 and below": ("complex", "float16", "bfloat16")}, backend_version
)
def tan(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.tan(x)
def tanh(
x: JaxArray, /, *, complex_mode="jax", out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.tanh(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def trunc(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
if "int" in str(x.dtype):
return x
else:
return jnp.trunc(x)
def exp2(
x: Union[JaxArray, float, list, tuple],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.power(2, x)
def imag(
val: JaxArray,
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.imag(val)
def angle(
z: JaxArray,
/,
*,
deg: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
return jnp.angle(z, deg=deg)
# Extra #
# ------#
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def erf(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jax.scipy.special.erf(x)
def maximum(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
use_where: bool = True,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
return jnp.where(x1 >= x2, x1, x2)
return jnp.maximum(x1, x2)
def minimum(
x1: Union[float, JaxArray],
x2: Union[float, JaxArray],
/,
*,
use_where: bool = True,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
return jnp.where(x1 <= x2, x1, x2)
return jnp.minimum(x1, x2)
def reciprocal(
x: Union[float, JaxArray], /, *, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.reciprocal(x)
def deg2rad(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.deg2rad(x)
def rad2deg(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.rad2deg(x)
def isreal(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.isreal(x)
@with_unsupported_dtypes({"0.4.24 and below": ("complex",)}, backend_version)
def fmod(
x1: JaxArray,
x2: JaxArray,
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.fmod(x1, x2)
def gcd(
x1: Union[JaxArray, float, list, tuple],
x2: Union[JaxArray, float, list, tuple],
/,
*,
out: Optional[JaxArray] = None,
) -> JaxArray:
x1, x2 = promote_types_of_inputs(x1, x2)
return jnp.gcd(x1, x2)
def real(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
return jnp.real(x)
| ivy/ivy/functional/backends/jax/elementwise.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/elementwise.py",
"repo_id": "ivy",
"token_count": 7743
} | 15 |
# global
import jax.numpy as jnp
from typing import Optional, Tuple
# local
from ivy.functional.backends.jax import JaxArray
def unravel_index(
indices: JaxArray,
shape: Tuple[int],
/,
*,
out: Optional[JaxArray] = None,
) -> Tuple[JaxArray]:
return jnp.unravel_index(indices.astype(jnp.int32), shape)
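# e.g. (a sketch): unravel_index(jnp.array([5]), (2, 3)) returns
# (Array([1]), Array([2])) -- flat index 5 in a 2x3 grid is row 1, col 2.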
| ivy/ivy/functional/backends/jax/experimental/searching.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/experimental/searching.py",
"repo_id": "ivy",
"token_count": 133
} | 16 |
# global
import jax.numpy as jnp
from typing import Union, Optional, Sequence
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.backends.jax import JaxArray
from . import backend_version
# Array API Standard #
# -------------------#
def min(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
initial: Optional[Union[int, float, complex]] = None,
where: Optional[JaxArray] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.min(
a=jnp.asarray(x), axis=axis, keepdims=keepdims, initial=initial, where=where
)
def max(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.max(a=jnp.asarray(x), axis=axis, keepdims=keepdims)
@with_unsupported_dtypes(
{"0.4.24 and below": "bfloat16"},
backend_version,
)
def mean(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
keepdims: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.mean(x, axis=axis, keepdims=keepdims, dtype=x.dtype)
def _infer_dtype(dtype: jnp.dtype):
default_dtype = ivy.infer_default_dtype(dtype)
if ivy.dtype_bits(dtype) < ivy.dtype_bits(default_dtype):
return default_dtype
return dtype
def prod(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[jnp.dtype] = None,
keepdims: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
dtype = _infer_dtype(x.dtype)
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.prod(a=x, axis=axis, dtype=dtype, keepdims=keepdims)
def std(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.std(x, axis=axis, ddof=correction, keepdims=keepdims)
def sum(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
dtype: Optional[jnp.dtype] = None,
keepdims: Optional[bool] = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
dtype = x.dtype
if dtype != x.dtype and not ivy.is_bool_dtype(x):
x = x.astype(dtype)
axis = tuple(axis) if isinstance(axis, list) else axis
return jnp.sum(a=x, axis=axis, dtype=dtype, keepdims=keepdims)
def var(
x: JaxArray,
/,
*,
axis: Optional[Union[int, Sequence[int]]] = None,
correction: Union[int, float] = 0.0,
keepdims: bool = False,
out: Optional[JaxArray] = None,
) -> JaxArray:
if axis is None:
axis = tuple(range(len(x.shape)))
axis = (axis,) if isinstance(axis, int) else tuple(axis)
if isinstance(correction, int):
ret = jnp.var(x, axis=axis, ddof=correction, keepdims=keepdims, out=out)
return ivy.astype(ret, x.dtype, copy=False)
if x.size == 0:
return jnp.asarray(float("nan"))
size = 1
for a in axis:
size *= x.shape[a]
if size == correction:
size += 0.0001 # to avoid division by zero in return
return ivy.astype(
jnp.multiply(
jnp.var(x, axis=axis, keepdims=keepdims, out=out),
size / jnp.abs(size - correction),
),
x.dtype,
copy=False,
)
# Extra #
# ------#
@with_unsupported_dtypes({"0.4.24 and below": ("bfloat16", "bool")}, backend_version)
def cumprod(
x: JaxArray,
/,
*,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
dtype: Optional[jnp.dtype] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
if x.dtype == jnp.bool_:
dtype = ivy.default_int_dtype(as_native=True)
else:
dtype = _infer_dtype(x.dtype)
if not (exclusive or reverse):
return jnp.cumprod(x, axis, dtype=dtype)
elif exclusive and reverse:
x = jnp.cumprod(jnp.flip(x, axis=(axis,)), axis=axis, dtype=dtype)
x = jnp.swapaxes(x, axis, -1)
x = jnp.concatenate((jnp.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = jnp.swapaxes(x, axis, -1)
return jnp.flip(x, axis=(axis,))
elif exclusive:
x = jnp.swapaxes(x, axis, -1)
x = jnp.concatenate((jnp.ones_like(x[..., -1:]), x[..., :-1]), -1)
x = jnp.cumprod(x, -1, dtype=dtype)
return jnp.swapaxes(x, axis, -1)
else:
x = jnp.cumprod(jnp.flip(x, axis=(axis,)), axis=axis, dtype=dtype)
return jnp.flip(x, axis=axis)
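# e.g. (a sketch): cumprod([1, 2, 3], exclusive=True) -> [1, 1, 2], and with
# reverse=True as well the result becomes [6, 3, 1].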
@with_unsupported_dtypes({"0.4.24 and below": "bool"}, backend_version)
def cumsum(
x: JaxArray,
axis: int = 0,
exclusive: bool = False,
reverse: bool = False,
*,
dtype: Optional[jnp.dtype] = None,
out: Optional[JaxArray] = None,
) -> JaxArray:
dtype = ivy.as_native_dtype(dtype)
if dtype is None:
if x.dtype == jnp.bool_:
dtype = ivy.default_int_dtype(as_native=True)
elif ivy.is_int_dtype(x.dtype):
dtype = ivy.promote_types(x.dtype, ivy.default_int_dtype(as_native=True))
else:
dtype = _infer_dtype(x.dtype)
if exclusive or reverse:
if exclusive and reverse:
x = jnp.cumsum(jnp.flip(x, axis=axis), axis=axis, dtype=dtype)
x = jnp.swapaxes(x, axis, -1)
x = jnp.concatenate((jnp.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = jnp.swapaxes(x, axis, -1)
res = jnp.flip(x, axis=axis)
elif exclusive:
x = jnp.swapaxes(x, axis, -1)
x = jnp.concatenate((jnp.zeros_like(x[..., -1:]), x[..., :-1]), -1)
x = jnp.cumsum(x, -1, dtype=dtype)
res = jnp.swapaxes(x, axis, -1)
elif reverse:
x = jnp.cumsum(jnp.flip(x, axis=axis), axis=axis, dtype=dtype)
res = jnp.flip(x, axis=axis)
return res
return jnp.cumsum(x, axis, dtype=dtype)
def einsum(
equation: str, *operands: JaxArray, out: Optional[JaxArray] = None
) -> JaxArray:
return jnp.einsum(equation, *operands)
| ivy/ivy/functional/backends/jax/statistical.py/0 | {
"file_path": "ivy/ivy/functional/backends/jax/statistical.py",
"repo_id": "ivy",
"token_count": 3184
} | 17 |
# global
import mxnet as mx
from typing import Union, Optional, Tuple, Literal, List, Sequence
from collections import namedtuple
# local
from ivy import inf
from ivy.utils.exceptions import IvyNotImplementedException
def cholesky(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
upper: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def cross(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
axisa: int = (-1),
axisb: int = (-1),
axisc: int = (-1),
axis: Optional[int] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def det(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
return mx.nd.linalg.det(x)
def diagonal(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
offset: int = 0,
axis1: int = (-2),
axis2: int = (-1),
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def eig(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Tuple[Union[(None, mx.ndarray.NDArray)]]:
raise IvyNotImplementedException()
def eigh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
UPLO: str = "L",
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Tuple[Union[(None, mx.ndarray.NDArray)]]:
raise IvyNotImplementedException()
def eigvalsh(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
UPLO: str = "L",
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def inner(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def inv(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
adjoint: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def matmul(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
transpose_a: bool = False,
transpose_b: bool = False,
adjoint_a: bool = False,
adjoint_b: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def matrix_norm(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
ord: Union[(int, float, Literal[(inf, (-inf), "fro", "nuc")])] = "fro",
axis: Tuple[(int, int)] = ((-2), (-1)),
keepdims: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def matrix_power(
x: Union[(None, mx.ndarray.NDArray)],
n: int,
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def matrix_rank(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
atol: Optional[Union[(float, Tuple[float])]] = None,
rtol: Optional[Union[(float, Tuple[float])]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def matrix_transpose(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
conjugate: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def outer(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def pinv(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
rtol: Optional[Union[(float, Tuple[float])]] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def qr(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
mode: str = "reduced",
out: Optional[
Tuple[(Union[(None, mx.ndarray.NDArray)], Union[(None, mx.ndarray.NDArray)])]
] = None,
) -> Tuple[(Union[(None, mx.ndarray.NDArray)], Union[(None, mx.ndarray.NDArray)])]:
res = namedtuple("qr", ["Q", "R"])
q, r = mx.np.linalg.qr(x, mode=mode)
return res(q, r)
def slogdet(
x: Union[(None, mx.ndarray.NDArray)], /
) -> Tuple[(Union[(None, mx.ndarray.NDArray)], Union[(None, mx.ndarray.NDArray)])]:
raise IvyNotImplementedException()
def solve(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
adjoint: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def svd(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
full_matrices: bool = True,
compute_uv: bool = True,
) -> Union[
(Union[(None, mx.ndarray.NDArray)], Tuple[(Union[(None, mx.ndarray.NDArray)], ...)])
]:
raise IvyNotImplementedException()
def svdvals(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
driver: Optional[str] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
# TODO: handling the driver argument
raise IvyNotImplementedException()
def tensordot(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
axes: Union[(int, Tuple[(List[int], List[int])])] = 2,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def trace(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
offset: int = 0,
axis1: int = 0,
axis2: int = 1,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def vecdot(
x1: Union[(None, mx.ndarray.NDArray)],
x2: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: int = (-1),
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def vector_norm(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
axis: Optional[Union[(int, Sequence[int])]] = None,
keepdims: bool = False,
ord: Union[(int, float, Literal[(inf, (-inf))])] = 2,
dtype: Optional[None] = None,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def diag(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
k: int = 0,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def vander(
x: Union[(None, mx.ndarray.NDArray)],
/,
*,
N: Optional[int] = None,
increasing: bool = False,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
def vector_to_skew_symmetric_matrix(
vector: Union[(None, mx.ndarray.NDArray)],
/,
*,
out: Optional[Union[(None, mx.ndarray.NDArray)]] = None,
) -> Union[(None, mx.ndarray.NDArray)]:
raise IvyNotImplementedException()
| ivy/ivy/functional/backends/mxnet/linear_algebra.py/0 | {
"file_path": "ivy/ivy/functional/backends/mxnet/linear_algebra.py",
"repo_id": "ivy",
"token_count": 3593
} | 18 |
# global
from typing import Union, Optional
import numpy as np
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy import promote_types_of_inputs
from ivy.functional.backends.numpy.helpers import _scalar_output_to_0d_array
from . import backend_version
@_scalar_output_to_0d_array
def abs(
x: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.absolute(x, out=out)
abs.support_native_out = True
@_scalar_output_to_0d_array
def acos(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arccos(x, out=out)
acos.support_native_out = True
@_scalar_output_to_0d_array
def acosh(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arccosh(x, out=out)
acosh.support_native_out = True
@_scalar_output_to_0d_array
def add(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
with ivy.ArrayMode(False):
x2 = multiply(x2, alpha)
return np.add(x1, x2, out=out)
add.support_native_out = True
@_scalar_output_to_0d_array
def asin(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arcsin(x, out=out)
asin.support_native_out = True
@_scalar_output_to_0d_array
def asinh(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arcsinh(x, out=out)
asinh.support_native_out = True
@_scalar_output_to_0d_array
def atan(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arctan(x, out=out)
atan.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def atan2(
x1: np.ndarray, x2: np.ndarray, /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.arctan2(x1, x2, out=out)
atan2.support_native_out = True
@_scalar_output_to_0d_array
def atanh(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.arctanh(x, out=out)
atanh.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_and(
x1: Union[int, bool, np.ndarray],
x2: Union[int, bool, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return np.bitwise_and(x1, x2, out=out)
bitwise_and.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_invert(
x: Union[int, bool, np.ndarray], /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.invert(x, out=out)
bitwise_invert.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_left_shift(
x1: Union[int, bool, np.ndarray],
x2: Union[int, bool, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return np.left_shift(x1, x2, out=out)
bitwise_left_shift.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_or(
x1: Union[int, bool, np.ndarray],
x2: Union[int, bool, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return np.bitwise_or(x1, x2, out=out)
bitwise_or.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_right_shift(
x1: Union[int, bool, np.ndarray],
x2: Union[int, bool, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return np.right_shift(x1, x2, out=out)
bitwise_right_shift.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def bitwise_xor(
x1: Union[int, bool, np.ndarray],
x2: Union[int, bool, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return np.bitwise_xor(x1, x2, out=out)
bitwise_xor.support_native_out = True
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
@_scalar_output_to_0d_array
def ceil(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
if "int" in str(x.dtype):
ret = np.copy(x)
else:
return np.ceil(x, out=out)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
ceil.support_native_out = True
@_scalar_output_to_0d_array
def cos(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.cos(x, out=out)
cos.support_native_out = True
@with_unsupported_dtypes({"1.26.3 and below": ("float16",)}, backend_version)
@_scalar_output_to_0d_array
def cosh(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.cosh(x, out=out)
cosh.support_native_out = True
@_scalar_output_to_0d_array
def divide(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
ret = np.divide(x1, x2, out=out)
if ivy.is_float_dtype(x1.dtype) or ivy.is_complex_dtype(x1.dtype):
ret = np.asarray(ret, dtype=x1.dtype)
else:
ret = np.asarray(ret, dtype=ivy.default_float_dtype(as_native=True))
return ret
divide.support_native_out = True
@_scalar_output_to_0d_array
def equal(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.equal(x1, x2, out=out)
equal.support_native_out = True
@_scalar_output_to_0d_array
def exp(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.exp(x, out=out)
exp.support_native_out = True
def exp2(
x: Union[np.ndarray, float, list, tuple],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.exp2(x, out=out)
exp2.support_native_out = True
@_scalar_output_to_0d_array
def expm1(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.expm1(x, out=out)
expm1.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def floor(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
if "int" in str(x.dtype):
ret = np.copy(x)
else:
return np.floor(x, out=out)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
floor.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def floor_divide(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.floor(np.divide(x1, x2)).astype(x1.dtype)
@_scalar_output_to_0d_array
def fmin(
x1: np.ndarray,
x2: np.ndarray,
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = promote_types_of_inputs(x1, x2)
return np.fmin(
x1,
x2,
out=out,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
)
fmin.support_native_out = True
@_scalar_output_to_0d_array
def greater(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.greater(x1, x2, out=out)
greater.support_native_out = True
@_scalar_output_to_0d_array
def greater_equal(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.greater_equal(x1, x2, out=out)
greater_equal.support_native_out = True
@_scalar_output_to_0d_array
def isfinite(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.isfinite(x, out=out)
isfinite.support_native_out = True
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
@_scalar_output_to_0d_array
def isinf(
x: np.ndarray,
/,
*,
detect_positive: bool = True,
detect_negative: bool = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if detect_negative and detect_positive:
return np.isinf(x)
elif detect_negative:
return np.isneginf(x)
elif detect_positive:
return np.isposinf(x)
return np.full_like(x, False, dtype=bool)
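# e.g. (a sketch): isinf(x, detect_positive=False) flags only -inf entries;
# with both detectors disabled an all-False boolean array is returned.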
@_scalar_output_to_0d_array
def isnan(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.isnan(x, out=out)
isnan.support_native_out = True
@_scalar_output_to_0d_array
def lcm(
x1: np.ndarray,
x2: np.ndarray,
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = promote_types_of_inputs(x1, x2)
return np.lcm(
x1,
x2,
out=out,
)
lcm.support_native_out = True
@_scalar_output_to_0d_array
def less(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.less(x1, x2, out=out)
less.support_native_out = True
@_scalar_output_to_0d_array
def less_equal(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.less_equal(x1, x2, out=out)
less_equal.support_native_out = True
@_scalar_output_to_0d_array
def log(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.log(x, out=out)
log.support_native_out = True
@_scalar_output_to_0d_array
def log10(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.log10(x, out=out)
log10.support_native_out = True
@_scalar_output_to_0d_array
def log1p(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.log1p(x, out=out)
log1p.support_native_out = True
@_scalar_output_to_0d_array
def log2(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.log2(x, out=out)
log2.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def logaddexp(
x1: np.ndarray, x2: np.ndarray, /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.logaddexp(x1, x2, out=out)
logaddexp.support_native_out = True
def logaddexp2(
x1: Union[np.ndarray, int, list, tuple],
x2: Union[np.ndarray, int, list, tuple],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = promote_types_of_inputs(x1, x2)
if not ivy.is_float_dtype(x1):
x1 = x1.astype(ivy.default_float_dtype(as_native=True))
x2 = x2.astype(ivy.default_float_dtype(as_native=True))
return np.logaddexp2(x1, x2, out=out)
logaddexp2.support_native_out = True
@_scalar_output_to_0d_array
def logical_and(
x1: np.ndarray, x2: np.ndarray, /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.logical_and(x1, x2, out=out)
logical_and.support_native_out = True
@_scalar_output_to_0d_array
def logical_not(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.logical_not(x, out=out)
logical_not.support_native_out = True
@_scalar_output_to_0d_array
def logical_or(
x1: np.ndarray, x2: np.ndarray, /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.logical_or(x1, x2, out=out)
logical_or.support_native_out = True
@_scalar_output_to_0d_array
def logical_xor(
x1: np.ndarray, x2: np.ndarray, /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.logical_xor(x1, x2, out=out)
logical_xor.support_native_out = True
@_scalar_output_to_0d_array
def multiply(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.multiply(x1, x2, out=out)
multiply.support_native_out = True
@_scalar_output_to_0d_array
def negative(
x: Union[float, np.ndarray], /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.negative(x, out=out)
negative.support_native_out = True
@_scalar_output_to_0d_array
def not_equal(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return np.not_equal(x1, x2, out=out)
not_equal.support_native_out = True
@_scalar_output_to_0d_array
def positive(
x: Union[float, np.ndarray], /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.positive(x, out=out)
positive.support_native_out = True
@_scalar_output_to_0d_array
def pow(
x1: np.ndarray,
x2: Union[int, float, np.ndarray],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
    if ivy.is_complex_dtype(x1) and ivy.any(ivy.isinf(x2)):
        ret = np.power(x1, x2)
        # select elementwise so array-valued exponents are handled too
        return np.where(
            np.isinf(x2), np.where(x2 < 0, np.nan + np.nan * 1j, -0 * 1j), ret
        )
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if ivy.is_int_dtype(x1) and ivy.any(x2 < 0):
return np.float_power(x1, x2, casting="unsafe").astype(x1.dtype)
return np.power(x1, x2)
pow.support_native_out = True
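# Illustrative usage sketch (not part of the original module): negative
# integer exponents are routed through ``np.float_power`` and then cast back
# to the integer input dtype, so fractional results truncate toward zero.
#
#   >>> pow(np.array([2, 2]), np.array([3, -1]))   # array([8, 0])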
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def remainder(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
modulus: bool = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if not modulus:
res = x1 / x2
res_floored = np.where(res >= 0, np.floor(res), np.ceil(res))
diff = np.asarray(res - res_floored, dtype=res.dtype)
diff, x2 = ivy.promote_types_of_inputs(diff, x2)
return np.asarray(np.round(diff * x2), dtype=x1.dtype)
return np.remainder(x1, x2, out=out)
remainder.support_native_out = True
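# Illustrative usage sketch (not part of the original module): with the
# default ``modulus=True`` the result follows the sign of the divisor, while
# ``modulus=False`` follows the sign of the dividend (C-style fmod).
#
#   >>> remainder(np.array([-5.0]), np.array([2.0]))                  # array([1.])
#   >>> remainder(np.array([-5.0]), np.array([2.0]), modulus=False)   # array([-1.])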
@_scalar_output_to_0d_array
def round(
x: np.ndarray, /, *, decimals: int = 0, out: Optional[np.ndarray] = None
) -> np.ndarray:
if "int" in str(x.dtype):
ret = np.copy(x)
else:
ret = np.round(x, decimals=decimals, out=out)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
round.support_native_out = True
def _abs_variant_sign(x):
return np.divide(x, np.abs(x), where=x != 0)
@_scalar_output_to_0d_array
def sign(
x: np.ndarray,
/,
*,
np_variant: Optional[bool] = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
if "complex" in str(x.dtype):
return np.sign(x, out=out) if np_variant else _abs_variant_sign(x)
return np.sign(x, out=out)
sign.support_native_out = True
@_scalar_output_to_0d_array
def sin(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.sin(x, out=out)
sin.support_native_out = True
@_scalar_output_to_0d_array
def sinh(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.sinh(x, out=out)
sinh.support_native_out = True
@_scalar_output_to_0d_array
def sqrt(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.sqrt(x, out=out)
sqrt.support_native_out = True
@_scalar_output_to_0d_array
def square(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.square(x, out=out)
square.support_native_out = True
@_scalar_output_to_0d_array
def subtract(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
ivy.set_array_mode(False)
x2 = multiply(x2, alpha)
ivy.unset_array_mode()
return np.subtract(x1, x2, out=out)
subtract.support_native_out = True
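# Illustrative usage sketch (not part of the original module; assumes an ivy
# backend has been set, since ``alpha`` scaling briefly toggles array mode):
#
#   >>> subtract(np.array([5.0]), np.array([2.0]), alpha=2)   # array([1.])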
@_scalar_output_to_0d_array
def trapz(
y: np.ndarray,
/,
*,
x: Optional[np.ndarray] = None,
dx: float = 1.0,
axis: int = -1,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.trapz(y, x=x, dx=dx, axis=axis)
trapz.support_native_out = False
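# Illustrative usage sketch (not part of the original module): integrates
# with the trapezoidal rule, here over unit and half-unit sample spacing.
#
#   >>> trapz(np.array([1.0, 2.0, 3.0]))           # 4.0
#   >>> trapz(np.array([1.0, 2.0, 3.0]), dx=0.5)   # 2.0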
@_scalar_output_to_0d_array
def tan(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.tan(x, out=out)
tan.support_native_out = True
@_scalar_output_to_0d_array
def tanh(
x: np.ndarray, /, *, complex_mode="jax", out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.tanh(x, out=out)
tanh.support_native_out = True
@_scalar_output_to_0d_array
def trunc(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
if "int" in str(x.dtype):
ret = np.copy(x)
else:
return np.trunc(x, out=out)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
trunc.support_native_out = True
# Extra #
# ------#
@_scalar_output_to_0d_array
def erf(x, /, *, out: Optional[np.ndarray] = None):
a1 = 0.254829592
a2 = -0.284496736
a3 = 1.421413741
a4 = -1.453152027
a5 = 1.061405429
p = 0.3275911
sign = np.sign(x)
x = np.abs(x)
# A&S formula 7.1.26
t = 1.0 / (1.0 + p * x)
y = 1.0 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * np.exp(-x * x)
ret = sign * y
if hasattr(x, "dtype"):
ret = np.asarray(ret, dtype=x.dtype)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
erf.support_native_out = True
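# The polynomial above is Abramowitz & Stegun formula 7.1.26, whose maximum
# absolute error is roughly 1.5e-7. Illustrative usage sketch (not part of
# the original module):
#
#   >>> erf(np.array([0.0, 1.0]))   # approximately array([0.       , 0.8427008])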
@_scalar_output_to_0d_array
def maximum(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
use_where: bool = True,
out: Optional[np.ndarray] = None,
):
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
ret = np.where(x1 >= x2, x1, x2)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
return np.maximum(x1, x2, out=out)
maximum.support_native_out = True
@_scalar_output_to_0d_array
def minimum(
x1: Union[float, np.ndarray],
x2: Union[float, np.ndarray],
/,
*,
use_where: bool = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
ret = np.where(x1 <= x2, x1, x2)
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
return np.minimum(x1, x2, out=out)
minimum.support_native_out = True
@_scalar_output_to_0d_array
def reciprocal(
x: Union[float, np.ndarray], /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
numerator = np.ones_like(x)
return np.true_divide(numerator, x, out=out)
reciprocal.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def deg2rad(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.deg2rad(x, out=out)
deg2rad.support_native_out = True
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def rad2deg(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.rad2deg(x, out=out)
rad2deg.support_native_out = True
@_scalar_output_to_0d_array
def isreal(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.isreal(x)
isreal.support_native_out = False
@_scalar_output_to_0d_array
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def fmod(
x1: np.ndarray,
x2: np.ndarray,
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = promote_types_of_inputs(x1, x2)
return np.fmod(
x1,
x2,
        out=out,
)
fmod.support_native_out = True
def angle(
z: np.ndarray,
/,
*,
deg: bool = False,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.angle(z, deg=deg)
angle.support_native_out = False
def gcd(
x1: Union[np.ndarray, int, list, tuple],
x2: Union[np.ndarray, float, list, tuple],
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
x1, x2 = promote_types_of_inputs(x1, x2)
return np.gcd(x1, x2, out=out)
gcd.support_native_out = True
def imag(
val: np.ndarray,
/,
*,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.imag(val)
imag.support_native_out = False
def nan_to_num(
x: np.ndarray,
/,
*,
copy: bool = True,
nan: Union[float, int] = 0.0,
posinf: Optional[Union[float, int]] = None,
neginf: Optional[Union[float, int]] = None,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
return np.nan_to_num(x, copy=copy, nan=nan, posinf=posinf, neginf=neginf)
nan_to_num.support_native_out = False
def real(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
return np.real(x)
| ivy/ivy/functional/backends/numpy/elementwise.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/elementwise.py",
"repo_id": "ivy",
"token_count": 10442
} | 19 |
# global
from typing import Optional, Tuple
import numpy as np
# local
from ivy.func_wrapper import with_supported_dtypes
from . import backend_version
@with_supported_dtypes({"1.26.3 and below": ("int32", "int64")}, backend_version)
def unravel_index(
indices: np.ndarray,
shape: Tuple[int],
/,
*,
out: Optional[np.ndarray] = None,
) -> Tuple[np.ndarray]:
ret = np.asarray(np.unravel_index(indices, shape), dtype=np.int32)
return tuple(ret)
unravel_index.support_native_out = False
| ivy/ivy/functional/backends/numpy/experimental/searching.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/experimental/searching.py",
"repo_id": "ivy",
"token_count": 196
} | 20 |
# global
import numpy as np
from typing import Optional, Literal, Union, List
# local
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from . import backend_version
def argsort(
x: np.ndarray,
/,
*,
axis: int = -1,
descending: bool = False,
stable: bool = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
kind = "stable" if stable else "quicksort"
return (
np.argsort(-x, axis=axis, kind=kind)
if descending
else np.argsort(x, axis=axis, kind=kind)
)
def sort(
x: np.ndarray,
/,
*,
axis: int = -1,
descending: bool = False,
stable: bool = True,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
kind = "stable" if stable else "quicksort"
ret = np.asarray(np.sort(x, axis=axis, kind=kind))
if descending:
ret = np.asarray(np.flip(ret, axis))
return ret
# msort
@with_unsupported_dtypes({"1.26.3 and below": ("complex",)}, backend_version)
def msort(
a: Union[np.ndarray, list, tuple], /, *, out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.msort(a)
msort.support_native_out = False
def searchsorted(
x: np.ndarray,
v: np.ndarray,
/,
*,
side: Literal["left", "right"] = "left",
sorter: Optional[Union[np.ndarray, List[int]]] = None,
ret_dtype: np.dtype = np.int64,
out: Optional[np.ndarray] = None,
) -> np.ndarray:
assert ivy.is_int_dtype(ret_dtype), TypeError(
"only Integer data types are supported for ret_dtype."
)
is_sorter_provided = sorter is not None
if is_sorter_provided:
assert ivy.is_int_dtype(sorter.dtype), TypeError(
f"Only signed integer data type for sorter is allowed, got {sorter.dtype}."
)
if x.ndim != 1:
assert x.shape[:-1] == v.shape[:-1], RuntimeError(
"the first N-1 dimensions of x array and v array "
f"must match, got {x.shape} and {v.shape}"
)
if is_sorter_provided:
x = np.take_along_axis(x, sorter, axis=-1)
original_shape = v.shape
x = x.reshape(-1, x.shape[-1])
v = v.reshape(-1, v.shape[-1])
out_array = np.empty_like(v)
for i in range(x.shape[0]):
out_array[i] = np.searchsorted(x[i], v[i], side=side)
ret = out_array.reshape(original_shape)
else:
ret = np.searchsorted(x, v, side=side, sorter=sorter)
return ret.astype(ret_dtype)
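# Illustrative usage sketch (not part of the original module): returns the
# insertion indices that keep ``x`` sorted, cast to ``ret_dtype`` (int64 by
# default).
#
#   >>> searchsorted(np.array([1, 3, 5, 7]), np.array([4]))                 # array([2])
#   >>> searchsorted(np.array([1, 3, 5, 7]), np.array([5]), side="right")   # array([3])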
| ivy/ivy/functional/backends/numpy/sorting.py/0 | {
"file_path": "ivy/ivy/functional/backends/numpy/sorting.py",
"repo_id": "ivy",
"token_count": 1121
} | 21 |
import tensorflow as tf
def if_else(cond, body_fn, orelse_fn, vars):
# back-compatibility
if isinstance(cond, bool):
v = cond
def cond(*_):
return v
cond = bool(cond(**vars))
return tf.cond(cond, lambda: body_fn(**vars), lambda: orelse_fn(**vars))
# use pythonic placeholder until the tracer supports callable arguments
def while_loop(test_fn, body_fn, vars):
def body_fn_wrapper(*loop_vars):
return body_fn(*loop_vars)
def test_fn_wrapper(*loop_vars):
return test_fn(*loop_vars)
if not vars:
vars = (0,)
elif isinstance(vars, dict):
vars = list(vars.values())
return tf.while_loop(test_fn_wrapper, body_fn_wrapper, loop_vars=vars)
def for_loop(
iterable,
body_fn,
vars,
):
iterator = iterable.__iter__()
vars_dict = _tuple_to_dict(vars)
def test_fn(*args):
nonlocal iterator, body_fn, vars_dict
try:
val = iterator.__next__()
except StopIteration:
return False
vars_tuple = body_fn(val, _dict_to_tuple(vars_dict))
for k in range(len(vars_tuple)):
vars_dict[k] = vars_tuple[k]
return True
def empty_function(*args):
return (0,)
while_loop(test_fn, empty_function, ())
return _dict_to_tuple(vars_dict)
def _tuple_to_dict(t):
return {k: t[k] for k in range(len(t))}
def _dict_to_tuple(d):
return tuple(d[k] for k in d)
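# Illustrative usage sketch for ``for_loop`` above (not part of the original
# module): the body receives the current value plus the loop-variable tuple
# and returns the updated tuple.
#
#   >>> for_loop(range(3), lambda i, vs: (vs[0] + i,), (0,))   # (3,)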
| ivy/ivy/functional/backends/tensorflow/control_flow_ops.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/control_flow_ops.py",
"repo_id": "ivy",
"token_count": 691
} | 22 |
# global
from collections import namedtuple
from typing import (
Iterable,
Union,
Optional,
Sequence,
Tuple,
NamedTuple,
List,
Literal,
Callable,
Any,
)
from numbers import Number
import tensorflow as tf
# local
from ivy.func_wrapper import with_unsupported_dtypes, handle_out_argument
from .. import backend_version
import ivy
from ivy.functional.ivy.experimental.manipulation import _to_tf_padding
def moveaxis(
a: Union[tf.Tensor, tf.Variable],
source: Union[int, Sequence[int]],
destination: Union[int, Sequence[int]],
/,
*,
copy: Optional[bool] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.moveaxis(a, source, destination)
@with_unsupported_dtypes({"2.15.0 and below": ("bfloat16",)}, backend_version)
def heaviside(
x1: Union[tf.Tensor, tf.Variable],
x2: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.cast(tf.experimental.numpy.heaviside(x1, x2), x1.dtype)
def flipud(
m: Union[tf.Tensor, tf.Variable],
/,
*,
copy: Optional[bool] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.flipud(m)
def vstack(
arrays: Union[Sequence[tf.Tensor], Sequence[tf.Variable]],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.vstack(arrays)
def hstack(
arrays: Union[Sequence[tf.Tensor], Sequence[tf.Variable]],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.hstack(arrays)
def rot90(
m: Union[tf.Tensor, tf.Variable],
/,
*,
copy: Optional[bool] = None,
k: int = 1,
axes: Tuple[int, int] = (0, 1),
out: Union[tf.Tensor, tf.Variable] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.rot90(m, k, axes)
@with_unsupported_dtypes({"2.15.0 and below": ("unsigned", "complex")}, backend_version)
def top_k(
x: tf.Tensor,
k: int,
/,
*,
axis: int = -1,
largest: bool = True,
sorted: bool = True,
out: Optional[Tuple[tf.Tensor, tf.Tensor]] = None,
) -> Tuple[tf.Tensor, tf.Tensor]:
    k = min(k, x.shape[axis])
    if not largest:
        indices = tf.experimental.numpy.argsort(x, axis=axis)
    else:
        indices = tf.experimental.numpy.argsort(-x, axis=axis)
    indices = tf.experimental.numpy.take(
        indices, tf.experimental.numpy.arange(k), axis=axis
    )
    indices = tf.dtypes.cast(indices, tf.int32)
if not sorted:
indices = tf.sort(indices, axis=axis)
topk_res = NamedTuple("top_k", [("values", tf.Tensor), ("indices", tf.Tensor)])
val = tf.experimental.numpy.take_along_axis(x, indices, axis=axis)
indices = tf.dtypes.cast(indices, tf.int64)
return topk_res(val, indices)
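# Illustrative usage sketch (not part of the original module; assumes eager
# TensorFlow):
#
#   >>> res = top_k(tf.constant([1.0, 3.0, 2.0]), 2)
#   >>> res.values    # [3., 2.]
#   >>> res.indices   # [1, 2]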
def fliplr(
m: Union[tf.Tensor, tf.Variable],
/,
*,
copy: Optional[bool] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.fliplr(m)
@with_unsupported_dtypes({"2.15.0 and below": ("bfloat16",)}, backend_version)
def i0(
x: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.math.bessel_i0(x, name=None)
def vsplit(
ary: Union[tf.Tensor, tf.Variable],
indices_or_sections: Union[int, Sequence[int], tf.Tensor, tf.Variable],
/,
*,
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
if len(ary.shape) < 2:
raise ivy.utils.exceptions.IvyError(
"vsplit only works on arrays of 2 or more dimensions"
)
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=0)
def dsplit(
ary: Union[tf.Tensor, tf.Variable],
indices_or_sections: Union[int, Sequence[int], tf.Tensor, tf.Variable],
/,
*,
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
if len(ary.shape) < 3:
raise ivy.utils.exceptions.IvyError(
"dsplit only works on arrays of 3 or more dimensions"
)
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=2)
def atleast_1d(
*arys: Union[tf.Tensor, tf.Variable, bool, Number],
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
return tf.experimental.numpy.atleast_1d(*arys)
def dstack(
arrays: Union[Sequence[tf.Tensor], Sequence[tf.Variable]],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
return tf.experimental.numpy.dstack(arrays)
def atleast_2d(
*arys: Union[tf.Tensor, tf.Variable],
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
return tf.experimental.numpy.atleast_2d(*arys)
def atleast_3d(
*arys: Union[tf.Tensor, tf.Variable, bool, Number],
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
return tf.experimental.numpy.atleast_3d(*arys)
def take_along_axis(
arr: Union[tf.Tensor, tf.Variable],
indices: Union[tf.Tensor, tf.Variable],
axis: int,
/,
*,
mode: str = "fill",
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if len(arr.shape) != len(indices.shape):
raise ivy.utils.exceptions.IvyException(
"arr and indices must have the same number of dimensions;"
+ f" got {len(arr.shape)} vs {len(indices.shape)}"
)
indices = tf.dtypes.cast(indices, tf.int32)
if mode not in ["clip", "fill", "drop"]:
raise ValueError(
f"Invalid mode '{mode}'. Valid modes are 'clip', 'fill', 'drop'."
)
arr_shape = arr.shape
if axis < 0:
axis += len(arr.shape)
if mode == "clip":
max_index = arr.shape[axis] - 1
indices = tf.clip_by_value(indices, 0, max_index)
elif mode in ("fill", "drop"):
if "float" in str(arr.dtype) or "complex" in str(arr.dtype):
fill_value = tf.constant(float("nan"), dtype=arr.dtype)
elif "uint" in str(arr.dtype):
fill_value = tf.constant(arr.dtype.max, dtype=arr.dtype)
elif "int" in str(arr.dtype):
fill_value = tf.constant(-arr.dtype.max - 1, dtype=arr.dtype)
else:
raise TypeError(
f"Invalid dtype '{arr.dtype}'. Valid dtypes are 'float', 'complex',"
" 'uint', 'int'."
)
indices = tf.where((indices < 0) | (indices >= arr.shape[axis]), -1, indices)
arr_shape = list(arr_shape)
arr_shape[axis] = 1
fill_arr = tf.fill(arr_shape, fill_value)
arr = tf.concat([arr, fill_arr], axis=axis)
return tf.experimental.numpy.take_along_axis(arr, indices, axis)
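# Illustrative usage sketch (not part of the original module): in the default
# "fill" mode, out-of-range indices yield the fill value (NaN for floats).
#
#   >>> arr = tf.constant([[10.0, 20.0], [30.0, 40.0]])
#   >>> take_along_axis(arr, tf.constant([[0, 5], [1, 0]]), 1)
#   [[10., nan], [40., 30.]]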
def hsplit(
ary: Union[tf.Tensor, tf.Variable],
indices_or_sections: Union[int, Tuple[int, ...]],
/,
*,
copy: Optional[bool] = None,
) -> List[Union[tf.Tensor, tf.Variable]]:
if len(ary.shape) == 1:
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=0)
return ivy.split(ary, num_or_size_splits=indices_or_sections, axis=1)
def broadcast_shapes(
*shapes: Union[List[int], List[Tuple]],
) -> Tuple[int, ...]:
if len(shapes) > 1:
desired_shape = tf.broadcast_dynamic_shape(shapes[0], shapes[1])
if len(shapes) > 2:
for i in range(2, len(shapes)):
desired_shape = tf.broadcast_dynamic_shape(desired_shape, shapes[i])
else:
        return tuple(shapes[0])
return tuple(desired_shape.numpy().tolist())
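# Illustrative usage sketch (not part of the original module):
#
#   >>> broadcast_shapes([2, 1], [1, 3])           # (2, 3)
#   >>> broadcast_shapes([5], [2, 1, 1], [3, 1])   # (2, 3, 5)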
def pad(
input: Union[tf.Tensor, tf.Variable],
pad_width: Union[Iterable[Tuple[int]], int],
/,
*,
mode: Union[
Literal[
"constant",
"dilated",
"edge",
"linear_ramp",
"maximum",
"mean",
"median",
"minimum",
"reflect",
"symmetric",
"wrap",
"empty",
],
Callable,
] = "constant",
stat_length: Union[Iterable[Tuple[int]], int] = 1,
constant_values: Union[Iterable[Tuple[Number]], Number] = 0,
end_values: Union[Iterable[Tuple[Number]], Number] = 0,
reflect_type: Literal["even", "odd"] = "even",
**kwargs: Optional[Any],
) -> Union[tf.Tensor, tf.Variable]:
pad_width = _to_tf_padding(pad_width, len(input.shape))
if isinstance(constant_values, (tf.Variable, tf.Tensor)):
if constant_values.dtype != input.dtype:
constant_values = tf.cast(constant_values, input.dtype)
return tf.pad(
input,
pad_width,
mode=mode,
constant_values=constant_values,
)
pad.partial_mixed_handler = (
lambda *args, mode="constant", constant_values=0, reflect_type="even", **kwargs: (
_check_tf_pad(args[0].shape, args[1], mode, constant_values, reflect_type)
)
)
def _check_tf_pad(input_shape, pad_width, mode, constant_values, reflect_type):
pad_width = _to_tf_padding(pad_width, len(input_shape))
return isinstance(constant_values, Number) and (
mode == "constant"
or (
reflect_type == "even"
and (
(
mode == "reflect"
and all(
pad_width[i][0] < s and pad_width[i][1] < s
for i, s in enumerate(input_shape)
)
)
or (
mode == "symmetric"
and all(
pad_width[i][0] <= s and pad_width[i][1] <= s
for i, s in enumerate(input_shape)
)
)
)
)
)
def expand(
x: Union[tf.Tensor, tf.Variable],
shape: Union[List[int], List[Tuple]],
/,
*,
copy: Optional[bool] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
shape = list(shape)
n_extra_dims = len(shape) - len(x.shape)
if n_extra_dims > 0:
new_shape = (1,) * n_extra_dims + tuple(x.shape)
x = tf.reshape(x, new_shape)
for i, dim in enumerate(shape):
if dim < 0:
shape[i] = x.shape[i]
return tf.broadcast_to(x, shape)
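# Illustrative usage sketch (not part of the original module): leading
# singleton axes are prepended as needed, and ``-1`` keeps an existing dim.
#
#   >>> expand(tf.ones((1, 3)), (2, -1)).shape     # (2, 3)
#   >>> expand(tf.constant([1, 2]), (3, 2)).shape  # (3, 2)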
def concat_from_sequence(
input_sequence: Union[Tuple[tf.Tensor], List[tf.Tensor]],
/,
*,
new_axis: int = 0,
axis: int = 0,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
is_tuple = type(input_sequence) is tuple
if is_tuple:
input_sequence = list(input_sequence)
highest_dtype = input_sequence[0].dtype
for i in input_sequence:
highest_dtype = ivy.as_native_dtype(ivy.promote_types(highest_dtype, i.dtype))
if new_axis == 0:
ret = tf.concat(input_sequence, axis=axis)
return ret
elif new_axis == 1:
ret = tf.stack(input_sequence, axis=axis)
return ret
def unique_consecutive(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
) -> Tuple[
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
Union[tf.Tensor, tf.Variable],
]:
Results = namedtuple(
"Results",
["output", "inverse_indices", "counts"],
)
x_shape = None
if axis is None:
x_shape = x.shape
x = tf.reshape(x, -1)
axis = -1
ndim = len(x.shape)
if axis < 0:
axis += ndim
splits = (
tf.where(
tf.math.reduce_any(
tf.experimental.numpy.diff(x, axis=axis) != 0,
axis=tuple(i for i in tf.range(ndim) if i != axis),
)
)
+ 1
)
if tf.size(splits) > 0:
sub_arrays = tf.experimental.numpy.split(x, tf.reshape(splits, -1), axis=axis)
else:
sub_arrays = [x]
output = tf.concat(
[
tf.raw_ops.UniqueV2(x=sub_array, axis=tf.constant([axis]))[0]
for sub_array in sub_arrays
],
axis=axis,
)
counts = tf.convert_to_tensor([sub_array.shape[axis] for sub_array in sub_arrays])
inverse_indices = tf.repeat(tf.range(len(counts)), counts)
if x_shape:
inverse_indices = tf.reshape(inverse_indices, x_shape)
return Results(
tf.cast(output, x.dtype),
tf.cast(inverse_indices, tf.int64),
tf.cast(counts, tf.int64),
)
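# Illustrative usage sketch (not part of the original module): collapses only
# *consecutive* duplicates, unlike a full unique.
#
#   >>> res = unique_consecutive(tf.constant([1, 1, 2, 2, 3, 1]))
#   >>> res.output            # [1, 2, 3, 1]
#   >>> res.counts            # [2, 2, 1, 1]
#   >>> res.inverse_indices   # [0, 0, 1, 1, 2, 3]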
def take(
x: Union[int, List, tf.Tensor, tf.Variable],
indices: Union[int, List, tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
mode: str = "clip",
fill_value: Optional[Number] = None,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if mode not in ["raise", "wrap", "clip", "fill"]:
raise ValueError("mode must be one of 'clip', 'raise', 'wrap', or 'fill'")
if not isinstance(x, (tf.Tensor, tf.Variable)):
x = tf.constant(x)
if len(x.shape) == 0:
x = tf.constant([x])
if not isinstance(indices, (tf.Tensor, tf.Variable)):
indices = tf.constant(indices)
if indices.dtype.is_floating:
indices = tf.cast(indices, tf.int64)
# raise
if mode == "raise":
mode = "clip"
if ivy.exists(axis):
if axis >= len(x.shape):
raise tf.errors.InvalidArgumentError(
None,
None,
f"Shape must be at least rank {axis+1} but is rank {len(x.shape)}",
)
x_shape = x.shape[axis]
else:
x_shape = tf.reduce_prod(x.shape)
bound_check = (indices < -x_shape) | (indices >= x_shape)
if tf.reduce_any(bound_check):
if len(indices.shape) == 0:
raise tf.errors.InvalidArgumentError(
None, None, f"index {indices} is not in [-{x_shape}, {x_shape})"
)
else:
first_non_zero = tuple(
map(
lambda n: n[0].numpy(),
tf.experimental.numpy.nonzero(bound_check),
)
)
raise tf.errors.InvalidArgumentError(
None,
None,
f"indices{list(first_non_zero)} = {indices[first_non_zero]} "
f"is not in [-{x_shape}, {x_shape})",
)
# clip, wrap
if mode != "fill":
ret = tf.experimental.numpy.take(x, indices, axis=axis, mode=mode)
if ivy.exists(out):
ivy.inplace_update(out, ret)
return ret
# fill
x_dtype = x.dtype
if fill_value is None:
# set according to jax behaviour
# https://tinyurl.com/66jn68uj
if x_dtype.is_floating or x_dtype.is_complex:
# NaN for inexact types
fill_value = float("NaN")
else:
if x_dtype == tf.bool:
# True for booleans
fill_value = True
elif x_dtype.is_unsigned:
# the largest positive value for unsigned types
fill_value = x_dtype.max
else:
# the largest negative value for signed types
fill_value = x_dtype.min
fill_value = tf.constant(fill_value, dtype=x_dtype)
x_shape = x.shape
ret = tf.experimental.numpy.take(x, indices, axis=axis, mode="wrap")
if len(ret.shape) == 0:
# if scalar, scalar fill (replace)
if tf.reduce_any(indices != 0):
ret = fill_value
else:
rank = len(x.shape)
if ivy.exists(axis):
axis = ((axis % rank) + rank) % rank
x_shape = x_shape[axis]
else:
axis = 0
x_shape = tf.reduce_prod(x_shape)
bound_check = tf.constant((indices < -x_shape) | (indices >= x_shape))
if tf.reduce_any(bound_check):
if axis > 0:
bound_check = tf.broadcast_to(
bound_check, (*x.shape[:axis], *bound_check.shape)
)
end_dim = x.shape[-((rank - axis) - 1) :]
else:
end_dim = x.shape[-(rank - 1) :]
if bound_check.shape != ret.shape:
slicer = list([Ellipsis] + ([None] * len(end_dim)))
bound_check = tf.broadcast_to(bound_check[slicer], ret.shape)
ret = tf.where(bound_check, fill_value[None], ret)
if ivy.exists(out):
ivy.inplace_update(out, ret)
return ret
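# Illustrative usage sketch (not part of the original module): "fill" mode
# replaces out-of-bounds gathers with NaN for floating inputs.
#
#   >>> take(tf.constant([1.0, 2.0, 3.0]), tf.constant([0, 5]), mode="fill")
#   [1., nan]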
def trim_zeros(a: tf.Tensor, /, *, trim: Optional[str] = "bf") -> tf.Tensor:
nonzero_indices = tf.where(a != 0)
first = tf.reduce_min(nonzero_indices)
last = tf.reduce_max(nonzero_indices) + 1
trim = trim.upper()
if "F" in trim:
first = tf.maximum(first, 0)
if "B" in trim:
last = tf.minimum(last, tf.cast(tf.shape(a)[0], tf.int64))
return a[first:last]
@handle_out_argument
def unflatten(
x: tf.Tensor,
/,
shape: Tuple[int] = None,
dim: Optional[int] = 0,
*,
out: Optional[tf.Tensor] = None,
name: Optional[str] = None,
) -> tf.Tensor:
dim = abs(len(x.shape) + dim) if dim < 0 else dim
res_shape = x.shape[:dim] + tf.TensorShape(shape) + x.shape[dim + 1 :]
res = tf.reshape(x, res_shape, name)
return res
| ivy/ivy/functional/backends/tensorflow/experimental/manipulation.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/experimental/manipulation.py",
"repo_id": "ivy",
"token_count": 8667
} | 23 |
# global
from numbers import Number
from typing import Optional, Union, Tuple
import tensorflow as tf
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from . import backend_version
# Array API Standard #
# ------------------ #
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def argmax(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
keepdims: bool = False,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
select_last_index: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
n_dims = tf.rank(x).numpy()
if axis is None:
x = tf.reshape(x, [-1])
if select_last_index:
x = tf.experimental.numpy.flip(x, axis=axis)
ret = tf.argmax(x, axis=axis)
if axis is not None:
ret = x.shape[axis] - ret - 1
else:
ret = tf.size(x, out_type=tf.int64) - ret - 1
else:
ret = tf.argmax(x, axis=axis)
if keepdims:
if axis is None:
ret = tf.reshape(ret, [1] * n_dims)
else:
ret = tf.expand_dims(ret, axis)
return tf.cast(ret, dtype) if dtype is not None else ret
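# Illustrative usage sketch (not part of the original module):
#
#   >>> argmax(tf.constant([1, 3, 3]))                           # 1
#   >>> argmax(tf.constant([1, 3, 3]), select_last_index=True)   # 2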
@with_unsupported_dtypes({"2.15.0 and below": ("complex",)}, backend_version)
def argmin(
x: Union[tf.Tensor, tf.Variable],
/,
*,
axis: Optional[int] = None,
keepdims: bool = False,
dtype: Optional[tf.dtypes.DType] = None,
select_last_index: bool = False,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
n_dims = tf.rank(x).numpy()
if axis is None:
x = tf.reshape(x, [-1])
if select_last_index:
x = tf.experimental.numpy.flip(x, axis=axis)
ret = tf.argmin(x, axis=axis)
if axis is not None:
ret = x.shape[axis] - ret - 1
else:
ret = tf.size(x, out_type=tf.int64) - ret - 1
else:
ret = tf.argmin(x, axis=axis)
if keepdims:
if axis is None:
ret = tf.reshape(ret, [1] * n_dims)
else:
ret = tf.expand_dims(ret, axis)
return tf.cast(ret, dtype) if dtype is not None else ret
def nonzero(
x: Union[tf.Tensor, tf.Variable],
/,
*,
as_tuple: bool = True,
size: Optional[int] = None,
fill_value: Number = 0,
) -> Union[tf.Tensor, tf.Variable, Tuple[Union[tf.Tensor, tf.Variable]]]:
res = tf.experimental.numpy.nonzero(x)
if size is not None:
dtype = tf.int64
if isinstance(fill_value, float):
dtype = tf.float64
res = tf.cast(res, dtype)
diff = size - res[0].shape[0]
if diff > 0:
res = tf.pad(res, [[0, 0], [0, diff]], constant_values=fill_value)
elif diff < 0:
res = tf.slice(res, [0, 0], [-1, size])
if as_tuple:
return tuple(res)
return tf.stack(res, axis=1)
def where(
condition: Union[tf.Tensor, tf.Variable],
x1: Union[tf.Tensor, tf.Variable],
x2: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return tf.cast(tf.experimental.numpy.where(condition, x1, x2), x1.dtype)
# Extra #
# ----- #
def argwhere(
x: Union[tf.Tensor, tf.Variable],
/,
*,
out: Optional[Union[tf.Tensor, tf.Variable]] = None,
) -> Union[tf.Tensor, tf.Variable]:
if isinstance(x, tf.Variable):
x_ndim = x.shape.rank
else:
x_ndim = x.ndim
if x_ndim == 0:
return tf.zeros(shape=[int(bool(x)), 0], dtype="int64")
where_x = tf.experimental.numpy.nonzero(x)
res = tf.experimental.numpy.concatenate(
[tf.expand_dims(item, -1) for item in where_x], -1
)
return res
| ivy/ivy/functional/backends/tensorflow/searching.py/0 | {
"file_path": "ivy/ivy/functional/backends/tensorflow/searching.py",
"repo_id": "ivy",
"token_count": 1820
} | 24 |
# global
from typing import Union, Optional
from math import pi
import torch
# local
import ivy
from ivy.func_wrapper import (
with_unsupported_dtypes,
with_supported_dtypes,
handle_numpy_arrays_in_specific_backend,
)
from ivy import promote_types_of_inputs
from . import backend_version
def _cast_for_unary_op(x):
if not isinstance(x, torch.Tensor):
x = torch.tensor(x)
return x
@handle_numpy_arrays_in_specific_backend
def add(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
return torch.add(x1, x2, alpha=alpha, out=out)
return torch.add(x1, x2, out=out)
add.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_xor(
x1: Union[int, bool, torch.Tensor],
x2: Union[int, bool, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return torch.bitwise_xor(x1, x2, out=out)
bitwise_xor.support_native_out = True
@with_supported_dtypes({"2.2 and below": ("complex",)}, backend_version)
def imag(
val: torch.Tensor,
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.imag(val)
imag.support_native_out = False
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def expm1(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.expm1(x, out=out)
expm1.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_invert(
x: Union[int, bool, torch.Tensor], /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.bitwise_not(x, out=out)
bitwise_invert.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def isfinite(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.isfinite(x)
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def isinf(
x: torch.Tensor,
/,
*,
detect_positive: bool = True,
detect_negative: bool = True,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x = _cast_for_unary_op(x)
if detect_negative and detect_positive:
return torch.isinf(x)
elif detect_negative:
return torch.isneginf(x)
elif detect_positive:
return torch.isposinf(x)
return torch.full_like(x, False, dtype=torch.bool)
@handle_numpy_arrays_in_specific_backend
def equal(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.eq(x1, x2, out=out)
equal.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def less_equal(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.less_equal(x1, x2, out=out)
less_equal.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_and(
x1: Union[int, bool, torch.Tensor],
x2: Union[int, bool, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return torch.bitwise_and(x1, x2, out=out)
bitwise_and.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def ceil(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
if "int" in str(x.dtype):
if ivy.exists(out):
return ivy.inplace_update(out, x)
return x
return torch.ceil(x, out=out)
ceil.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def floor(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
if "int" in str(x.dtype):
if ivy.exists(out):
return ivy.inplace_update(out, x)
return x
return torch.floor(x, out=out)
floor.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
def fmin(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    return torch.fmin(x1, x2, out=out)
fmin.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def asin(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.asin(x, out=out)
asin.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def asinh(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.asinh(x, out=out)
asinh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def sign(
x: torch.Tensor,
/,
*,
np_variant: Optional[bool] = True,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x = _cast_for_unary_op(x)
if "complex" in str(x.dtype):
if np_variant:
return torch.where(
x.real != 0, torch.sign(x.real) + 0.0j, torch.sign(x.imag) + 0.0j
)
return torch.sgn(x, out=out)
return torch.sign(x, out=out)
sign.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def sqrt(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.sqrt(x, out=out)
sqrt.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def cosh(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.cosh(x, out=out)
cosh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def log10(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.log10(x, out=out)
log10.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def log2(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.log2(x, out=out)
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def log1p(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.log1p(x, out=out)
log1p.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def isnan(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.isnan(x)
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def less(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.lt(x1, x2, out=out)
less.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def multiply(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.multiply(x1, x2, out=out)
multiply.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def cos(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.cos(x, out=out)
cos.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def logical_not(
x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.logical_not(x.type(torch.bool), out=out)
logical_not.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def divide(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
ret = torch.div(x1, x2)
if ivy.is_float_dtype(x1.dtype) or ivy.is_complex_dtype(x1.dtype):
ret = ivy.astype(ret, x1.dtype, copy=False)
else:
ret = ivy.astype(ret, ivy.default_float_dtype(as_native=True), copy=False)
return ret
divide.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def greater(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.greater(x1, x2, out=out)
greater.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def greater_equal(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.greater_equal(x1, x2, out=out)
greater_equal.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def acos(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.acos(x, out=out)
acos.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def lcm(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = promote_types_of_inputs(x1, x2)
return torch.lcm(x1, x2, out=out)
lcm.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def logical_xor(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.logical_xor(x1.type(torch.bool), x2.type(torch.bool), out=out)
logical_xor.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def logical_and(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.logical_and(x1.type(torch.bool), x2.type(torch.bool), out=out)
logical_and.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def logical_or(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.logical_or(x1.type(torch.bool), x2.type(torch.bool), out=out)
logical_or.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def acosh(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.acosh(x, out=out)
acosh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def sin(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.sin(x, out=out)
sin.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def negative(
x: Union[float, torch.Tensor], /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.neg(x, out=out)
negative.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def not_equal(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.not_equal(x1, x2, out=out)
not_equal.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def tanh(
x: torch.Tensor, /, *, complex_mode="jax", out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.tanh(x, out=out)
tanh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def floor_divide(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if ivy.exists(out):
if not ivy.is_float_dtype(out):
return ivy.inplace_update(
out, torch.floor(torch.div(x1, x2)).type(out.dtype)
)
return torch.floor(torch.div(x1, x2), out=out).type(x1.dtype)
floor_divide.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_or(
x1: Union[int, bool, torch.Tensor],
x2: Union[int, bool, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return torch.bitwise_or(x1, x2, out=out)
bitwise_or.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def sinh(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.sinh(x, out=out)
sinh.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def positive(
x: Union[float, torch.Tensor], /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.positive(x)
@handle_numpy_arrays_in_specific_backend
def square(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.square(x, out=out)
square.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def pow(
x1: torch.Tensor,
x2: Union[int, float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    if ivy.is_complex_dtype(x1) and ivy.any(ivy.isinf(x2)):
        ret = torch.pow(x1, x2)
        x2 = torch.as_tensor(x2).to(torch.float64)
        # build the fill values elementwise so tensor-valued exponents work too
        fill = torch.where(
            x2 < 0,
            torch.full_like(x2, torch.nan + torch.nan * 1j, dtype=ret.dtype),
            torch.full_like(x2, -0 * 1j, dtype=ret.dtype),
        )
        return torch.where(ivy.isinf(x2), fill, ret)
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if ivy.any(x1 == 0):
if ivy.is_complex_dtype(x2):
x2 = torch.broadcast_to(x2, x1.shape)
ret = torch.pow(x1, x2)
return torch.where(x1 == 0, torch.nan + torch.nan * 1j, ret)
elif (
ivy.any(x2 < 0)
and ivy.is_int_dtype(x2)
and all(dtype not in str(x1.dtype) for dtype in ["int16", "int8"])
):
if ivy.is_int_dtype(x1):
fill_value = torch.iinfo(x1.dtype).min
else:
fill_value = torch.finfo(x1.dtype).min
x2 = torch.broadcast_to(x2, x1.shape)
ret = torch.pow(x1, x2)
return torch.where(torch.bitwise_and(x1 == 0, x2 < 0), fill_value, ret)
return torch.pow(x1, x2, out=out)
pow.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def round(
x: torch.Tensor, /, *, decimals: int = 0, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
if "int" in str(x.dtype):
if ivy.exists(out):
return ivy.inplace_update(out, x)
return x
return torch.round(x, decimals=decimals, out=out)
round.support_native_out = True
def trapz(
y: torch.Tensor,
/,
*,
x: Optional[torch.Tensor] = None,
dx: Optional[float] = None,
axis: int = -1,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if x is None:
dx = dx if dx is not None else 1
return torch.trapezoid(y, dx=dx, dim=axis)
else:
if dx is not None:
            raise TypeError(
                "trapezoid() received an invalid combination of arguments - got "
                "(Tensor, Tensor, int), but expected one of: "
                "* (Tensor y, Tensor x, *, int dim) "
                "* (Tensor y, *, Number dx, int dim)"
            )
else:
return torch.trapezoid(y, x=x, dim=axis)
trapz.support_native_out = False
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def trunc(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
if "int" not in str(x.dtype):
return torch.trunc(x, out=out)
ret = x
if ivy.exists(out):
return ivy.inplace_update(out, ret)
return ret
trunc.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def abs(
x: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x = _cast_for_unary_op(x)
if x.dtype is torch.bool:
if ivy.exists(out):
return ivy.inplace_update(out, x)
return x
return torch.abs(x, out=out)
abs.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def logaddexp(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.logaddexp(x1, x2, out=out)
logaddexp.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def logaddexp2(
x1: Union[torch.Tensor, float, list, tuple],
x2: Union[torch.Tensor, float, list, tuple],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = promote_types_of_inputs(x1, x2)
if not ivy.is_float_dtype(x1):
x1 = x1.type(ivy.default_float_dtype(as_native=True))
x2 = x2.type(ivy.default_float_dtype(as_native=True))
return torch.logaddexp2(x1, x2, out=out)
logaddexp2.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "bfloat16")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def tan(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.tan(x, out=out)
tan.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def atan(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.atan(x, out=out)
atan.support_native_out = True
@with_unsupported_dtypes(
{"2.2 and below": ("float16", "bfloat16", "complex")}, backend_version
) # TODO Fixed in PyTorch 1.12.1 (this note excludes complex)
@handle_numpy_arrays_in_specific_backend
def atan2(
x1: torch.Tensor, x2: torch.Tensor, /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
return torch.atan2(x1, x2, out=out)
atan2.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def log(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.log(x, out=out)
log.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def exp(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.exp(x, out=out)
exp.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def exp2(
x: Union[torch.Tensor, float, list, tuple],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
return torch.exp2(x, out=out)
exp2.support_native_out = True
@handle_numpy_arrays_in_specific_backend
def subtract(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
alpha: Optional[Union[int, float]] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if alpha not in (1, None):
return torch.subtract(x1, x2, alpha=alpha, out=out)
return torch.subtract(x1, x2, out=out)
subtract.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def remainder(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
modulus: bool = True,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if not modulus:
res = x1 / x2
res_floored = torch.where(res >= 0, torch.floor(res), torch.ceil(res))
diff = res - res_floored
diff, x2 = ivy.promote_types_of_inputs(diff, x2)
if ivy.exists(out):
if out.dtype != x2.dtype:
return ivy.inplace_update(
out, torch.round(torch.mul(diff, x2)).to(out.dtype)
)
return torch.round(torch.mul(diff, x2), out=out).to(x1.dtype)
return torch.remainder(x1, x2, out=out).to(x1.dtype)
remainder.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def atanh(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.atanh(x, out=out)
atanh.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_right_shift(
x1: Union[int, bool, torch.Tensor],
x2: Union[int, bool, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
x2 = torch.clamp(x2, min=0, max=torch.iinfo(x2.dtype).bits - 1)
return torch.bitwise_right_shift(x1, x2, out=out)
bitwise_right_shift.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def bitwise_left_shift(
x1: Union[int, bool, torch.Tensor],
x2: Union[int, bool, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2, array_api_promotion=True)
return torch.bitwise_left_shift(x1, x2, out=out)
bitwise_left_shift.support_native_out = True
# Extra #
# ------#
@with_unsupported_dtypes({"2.2 and below": ("float16", "complex")}, backend_version)
@handle_numpy_arrays_in_specific_backend
def erf(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.erf(x, out=out)
erf.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def minimum(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
use_where: bool = True,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
return torch.where(x1 <= x2, x1, x2, out=out)
return torch.minimum(x1, x2, out=out)
minimum.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def maximum(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
use_where: bool = True,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
if use_where:
return torch.where(x1 >= x2, x1, x2, out=out)
return torch.maximum(x1, x2, out=out)
maximum.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("float16",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def reciprocal(
x: Union[float, torch.Tensor], /, *, out: Optional[torch.Tensor] = None
) -> torch.Tensor:
x = _cast_for_unary_op(x)
return torch.reciprocal(x, out=out)
reciprocal.support_native_out = True
@with_unsupported_dtypes(
{"2.2 and below": ("complex64", "complex128")}, backend_version
)
@handle_numpy_arrays_in_specific_backend
def deg2rad(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
return torch.deg2rad(x, out=out)
deg2rad.support_native_out = True
@with_unsupported_dtypes(
{"2.2 and below": ("complex64", "complex128")}, backend_version
)
@handle_numpy_arrays_in_specific_backend
def rad2deg(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
return torch.rad2deg(x, out=out)
rad2deg.support_native_out = True
@with_unsupported_dtypes({"2.2 and below": ("complex",)}, backend_version)
@handle_numpy_arrays_in_specific_backend
def trunc_divide(
x1: Union[float, torch.Tensor],
x2: Union[float, torch.Tensor],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = ivy.promote_types_of_inputs(x1, x2)
ret = torch.div(x1, x2, rounding_mode="trunc")
if ivy.is_float_dtype(x1.dtype):
ret = ret.to(x1.dtype)
else:
ret = ret.to(ivy.default_float_dtype(as_native=True))
return ret
@handle_numpy_arrays_in_specific_backend
def isreal(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
return torch.isreal(x)
@with_unsupported_dtypes(
{"2.2 and below": ("bfloat16", "complex")},
backend_version,
)
@handle_numpy_arrays_in_specific_backend
def fmod(
x1: torch.Tensor,
x2: torch.Tensor,
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = promote_types_of_inputs(x1, x2)
    return torch.fmod(x1, x2, out=out)
fmod.support_native_out = True
def gcd(
x1: Union[torch.Tensor, int, list, tuple],
x2: Union[torch.Tensor, float, list, tuple],
/,
*,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
x1, x2 = promote_types_of_inputs(x1, x2)
return torch.gcd(x1, x2, out=out)
gcd.support_native_out = True
def angle(
input: torch.Tensor,
/,
*,
deg: Optional[bool] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if deg:
return torch.angle(input, out=out) * (180 / pi)
else:
return torch.angle(input, out=out)
angle.support_native_out = True
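# Illustrative usage sketch (not part of the original module):
#
#   >>> angle(torch.tensor([1.0 + 1.0j]), deg=True)   # tensor([45.])
#   >>> angle(torch.tensor([0.0 + 1.0j]))             # tensor([1.5708])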
def nan_to_num(
x: torch.Tensor,
/,
*,
copy: bool = True,
nan: Union[float, int] = 0.0,
posinf: Optional[Union[float, int]] = None,
neginf: Optional[Union[float, int]] = None,
out: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    if copy:
        return torch.nan_to_num(x, nan=nan, posinf=posinf, neginf=neginf, out=out)
    # copy=False: write the result back into x so the update is in place
    return torch.nan_to_num(x, nan=nan, posinf=posinf, neginf=neginf, out=x)
def real(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
return torch.real(x)
| ivy/ivy/functional/backends/torch/elementwise.py/0 | {
"file_path": "ivy/ivy/functional/backends/torch/elementwise.py",
"repo_id": "ivy",
"token_count": 12950
} | 25 |