diff --git a/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md b/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
index dc35ff0fba7271..abd997f36c3f2c 100644
--- a/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
+++ b/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
@@ -1582,9 +1582,9 @@ OI, which means that Input changes the fastest, then Output.
 
 **Mathematical Formulation**
 
-    \f[
-        output[:, ... ,:, i, ... , j,:, ... ,:] = input2[:, ... ,:, input1[i, ... ,j],:, ... ,:]
-    \f]
+\f[
+    output[:, ... ,:, i, ... , j,:, ... ,:] = input2[:, ... ,:, input1[i, ... ,j],:, ... ,:]
+\f]
 
 **Inputs**
 
@@ -5086,7 +5086,9 @@ t \in \left ( 0, \quad tiles \right )
 
 Output tensor is populated by values computes in the following way:
 
-    output[i1, ..., i(axis-1), j, i(axis+1) ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]), k, sort, mode)
+\f[
+output[i1, ..., i(axis-1), j, i(axis+1), ..., iN] = top_k(input[i1, ...., i(axis-1), :, i(axis+1), ..., iN], k, sort, mode)
+\f]
 
 So for each slice `input[i1, ...., i(axis-1), :, i(axis+1), ..., iN]` which represents 1D array, top_k value is computed individually. Sorting and minimum/maximum are controlled by `sort` and `mode` attributes.
 
diff --git a/docs/ops/activation/HSwish_4.md b/docs/ops/activation/HSwish_4.md
index a2bf8407ea34ec..bf572c39f43f27 100644
--- a/docs/ops/activation/HSwish_4.md
+++ b/docs/ops/activation/HSwish_4.md
@@ -9,9 +9,9 @@
 
 **Detailed description**: For each element from the input tensor calculates corresponding element in the output tensor with the following formula:
 
-    \f[
-    HSwish(x) = x \frac{min(max(x + 3, 0), 6)}{6}
-    \f]
+\f[
+HSwish(x) = x \frac{min(max(x + 3, 0), 6)}{6}
+\f]
 
 The HSwish operation is introduced in the following [article](https://arxiv.org/pdf/1905.02244.pdf).
 
diff --git a/docs/ops/activation/Mish_4.md b/docs/ops/activation/Mish_4.md
index 6163131e11073f..8eda674f5039f4 100644
--- a/docs/ops/activation/Mish_4.md
+++ b/docs/ops/activation/Mish_4.md
@@ -26,9 +26,9 @@
 
 For each element from the input tensor calculates corresponding element in the output tensor with the following formula:
 
-    \f[
-    Mish(x) = x*tanh(ln(1.0+e^{x}))
-    \f]
+\f[
+Mish(x) = x*tanh(ln(1.0+e^{x}))
+\f]
 
 **Examples**
 
diff --git a/docs/ops/activation/Sigmoid_1.md b/docs/ops/activation/Sigmoid_1.md
index 17e012061f9c70..305bd81b1644de 100644
--- a/docs/ops/activation/Sigmoid_1.md
+++ b/docs/ops/activation/Sigmoid_1.md
@@ -14,9 +14,9 @@
 
 For each element from the input tensor calculates corresponding element in the output tensor with the following formula:
 
-    \f[
-    sigmoid( x ) = \frac{1}{1+e^{-x}}
-    \f]
+\f[
+sigmoid( x ) = \frac{1}{1+e^{-x}}
+\f]
 
 **Inputs**:
 
diff --git a/docs/ops/activation/SoftPlus_4.md b/docs/ops/activation/SoftPlus_4.md
index 112faa2873098e..135c4cb9dccae4 100644
--- a/docs/ops/activation/SoftPlus_4.md
+++ b/docs/ops/activation/SoftPlus_4.md
@@ -9,9 +9,9 @@
 
 **Detailed description**: For each element from the input tensor calculates corresponding element in the output tensor with the following formula:
 
-    \f[
-    SoftPlus(x) = ln(e^{x} + 1.0)
-    \f]
+\f[
+SoftPlus(x) = ln(e^{x} + 1.0)
+\f]
 
 **Attributes**: *SoftPlus* operation has no attributes.
 
diff --git a/docs/ops/activation/Swish_4.md b/docs/ops/activation/Swish_4.md
index e8a51c9dc048db..78bcb3866e7b91 100644
--- a/docs/ops/activation/Swish_4.md
+++ b/docs/ops/activation/Swish_4.md
@@ -9,9 +9,9 @@
 
 **Detailed description**: For each element from the input tensor calculates corresponding element in the output tensor with the following formula:
 
-    \f[
-    Swish(x) = x / (1.0 + e^{-(beta * x)})
-    \f]
+\f[
+Swish(x) = x / (1.0 + e^{-(beta * x)})
+\f]
 
 The Swish operation is introduced in the [article](https://arxiv.org/pdf/1710.05941.pdf).
 
diff --git a/docs/ops/pooling/AvgPool_1.md b/docs/ops/pooling/AvgPool_1.md
index dfa04c476b02ed..b8f0ecb2f31ff3 100644
--- a/docs/ops/pooling/AvgPool_1.md
+++ b/docs/ops/pooling/AvgPool_1.md
@@ -78,9 +78,9 @@
 
 **Mathematical Formulation**
 
-    \f[
-    output_{j} = \frac{\sum_{i = 0}^{n}x_{i}}{n}
-    \f]
+\f[
+output_{j} = \frac{\sum_{i = 0}^{n}x_{i}}{n}
+\f]
 
 **Example**
 
diff --git a/docs/ops/pooling/MaxPool_1.md b/docs/ops/pooling/MaxPool_1.md
index 6e705e49a22c8e..e730b7892ca6ba 100644
--- a/docs/ops/pooling/MaxPool_1.md
+++ b/docs/ops/pooling/MaxPool_1.md
@@ -70,9 +70,9 @@
 
 **Mathematical Formulation**
 
-    \f[
-    output_{j} = MAX\{ x_{0}, ... x_{i}\}
-    \f]
+\f[
+output_{j} = MAX\{ x_{0}, ... x_{i}\}
+\f]
 
 **Example**
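For reference only, outside the patch above: a minimal NumPy sketch of the element-wise activation formulas whose `\f[ ... \f]` blocks are being de-indented, as a sanity check that the formulas read the same after the markup change. The function names and the `beta` default are illustrative assumptions, not OpenVINO API.

```python
# Illustrative sketch: the element-wise formulas from the activation specs above,
# written out in NumPy to show what the \f[ ... \f] blocks describe.
import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + np.exp(-x))

def hswish(x):
    # HSwish(x) = x * min(max(x + 3, 0), 6) / 6
    return x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

def softplus(x):
    # SoftPlus(x) = ln(e^x + 1)
    return np.log(np.exp(x) + 1.0)

def mish(x):
    # Mish(x) = x * tanh(ln(1 + e^x)) = x * tanh(SoftPlus(x))
    return x * np.tanh(softplus(x))

def swish(x, beta=1.0):
    # Swish(x) = x / (1 + e^-(beta * x)); beta = 1.0 is an assumed default here
    return x / (1.0 + np.exp(-beta * x))

x = np.linspace(-3.0, 3.0, 7)
for name, fn in [("sigmoid", sigmoid), ("hswish", hswish),
                 ("softplus", softplus), ("mish", mish), ("swish", swish)]:
    print(name, np.round(fn(x), 4))
```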