
Commit 42e31c4
Fix broken references
ricardoV94 committed Jan 31, 2025
1 parent 4e55e0e commit 42e31c4
Showing 7 changed files with 8 additions and 16 deletions.
doc/core_development_guide.rst (1 addition, 1 deletion)
@@ -26,4 +26,4 @@ some of them might be outdated though:

* :ref:`unittest` -- Tutorial on how to use unittest in testing PyTensor.

-* :ref:`sparse` -- Description of the ``sparse`` type in PyTensor.
+* :ref:`libdoc_sparse` -- Description of the ``sparse`` type in PyTensor.
doc/extending/creating_a_c_op.rst (2 additions, 2 deletions)
@@ -923,7 +923,7 @@ pre-defined macros. These section tags have no macros: ``init_code``,
discussed below.

* ``APPLY_SPECIFIC(str)`` which will automatically append a name
-  unique to the :ref:`Apply` node that applies the `Op` at the end
+  unique to the :ref:`apply` node that applies the `Op` at the end
of the provided ``str``. The use of this macro is discussed
further below.

@@ -994,7 +994,7 @@ Apply node in their own names to avoid conflicts between the different
versions of the apply-specific code. The code that wasn't
apply-specific was simply defined in the ``c_support_code`` method.

-To make indentifiers that include the :ref:`Apply` node name use the
+To make indentifiers that include the :ref:`apply` node name use the
``APPLY_SPECIFIC(str)`` macro. In the above example, this macro is
used when defining the functions ``vector_elemwise_mult`` and
``vector_times_vector`` as well as when calling function
doc/extending/creating_an_op.rst (1 addition, 1 deletion)
@@ -7,7 +7,7 @@ Creating a new :class:`Op`: Python implementation
So suppose you have looked through the library documentation and you don't see
a function that does what you want.

-If you can implement something in terms of an existing :ref:`Op`, you should do that.
+If you can implement something in terms of an existing :ref:`op`, you should do that.
Odds are your function that uses existing PyTensor expressions is short,
has no bugs, and potentially profits from rewrites that have already been
implemented.
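As a quick illustration of the advice in this hunk, composing existing ops instead of writing a new one can look like the following minimal Python sketch (the particular expression, softplus of a matrix-vector product, is an arbitrary example, not taken from the docs):

    import pytensor
    import pytensor.tensor as pt

    # Build the computation from existing ops rather than a custom Op;
    # PyTensor's graph rewrites can then optimize it for free.
    x = pt.vector("x")
    W = pt.matrix("W")
    y = pt.softplus(pt.dot(W, x))

    f = pytensor.function([x, W], y)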
doc/extending/inplace.rst (1 addition, 1 deletion)
@@ -200,7 +200,7 @@ input(s)'s memory). From there, go to the previous section.
certainly lead to erroneous computations.

You can often identify an incorrect `Op.view_map` or :attr:`Op.destroy_map`
-by using :ref:`DebugMode`.
+by using :ref:`DebugMode <debugmode>`.

.. note::
Consider using :class:`DebugMode` when developing
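To make the ``DebugMode`` pointer above concrete, here is a minimal sketch of compiling under it (the tiny graph is a placeholder; any graph containing the suspect ``Op`` works the same way):

    import pytensor
    import pytensor.tensor as pt

    x = pt.vector("x")
    y = x + 1  # stand-in for a graph using the Op under suspicion

    # DebugMode re-runs each node with extra self-checks and raises an
    # error when results disagree, e.g. due to a wrong view_map or
    # destroy_map that misreports input/output memory aliasing.
    f = pytensor.function([x], y, mode="DebugMode")
    f([1.0, 2.0, 3.0])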
doc/extending/other_ops.rst (1 addition, 1 deletion)
@@ -197,7 +197,7 @@ Want C speed without writing C code for your new Op? You can use Numba
to generate the C code for you! Here is an `example
Op <https://gist.github.com/nouiz/5492778#file-theano_op-py>`_ doing that.

-.. _alternate_PyTensor_types:
+.. _alternate_pytensor_types:

Alternate PyTensor Types
========================
doc/library/tensor/random/index.rst (1 addition, 1 deletion)
@@ -83,7 +83,7 @@ Low-level objects
.. automodule:: pytensor.tensor.random.op
:members: RandomVariable, default_rng

-..automodule:: pytensor.tensor.random.type
+.. automodule:: pytensor.tensor.random.type
:members: RandomType, RandomGeneratorType, random_generator_type

.. automodule:: pytensor.tensor.random.var
doc/tutorial/examples.rst (1 addition, 9 deletions)
@@ -347,15 +347,7 @@ afterwards compile this expression to get functions,
using pseudo-random numbers is not as straightforward as it is in
NumPy, though also not too complicated.

-The way to think about putting randomness into PyTensor's computations is
-to put random variables in your graph. PyTensor will allocate a NumPy
-`RandomStream` object (a random number generator) for each such
-variable, and draw from it as necessary. We will call this sort of
-sequence of random numbers a *random stream*. *Random streams* are at
-their core shared variables, so the observations on shared variables
-hold here as well. PyTensor's random objects are defined and implemented in
-:ref:`RandomStream<libdoc_tensor_random_utils>` and, at a lower level,
-in :ref:`RandomVariable<libdoc_tensor_random_basic>`.
+The general user-facing API is documented in :ref:`RandomStream<libdoc_tensor_random_basic>`

For a more technical explanation of how PyTensor implements random variables see :ref:`prng`.

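For reference, the user-facing ``RandomStream`` API that the rewritten paragraph points to is used roughly as follows (seed, distribution, and shape are arbitrary choices for this sketch):

    import pytensor
    from pytensor.tensor.random.utils import RandomStream

    srng = RandomStream(seed=234)
    rv_u = srng.uniform(0, 1, size=(2, 2))  # a random variable in the graph

    # Each call to the compiled function draws fresh numbers; the
    # stream's state lives in shared variables under the hood.
    f = pytensor.function([], rv_u)
    print(f())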
