Linear Regression Tool for Autodesk Maya

Welcome to the Linear Regression Tool! This tool allows you to train a Linear Regression model with Elastic Net regularization directly in Autodesk Maya and generate a custom node that embeds the trained model for real-time inference.

Features

  1. Attribute-Based Training:
    • Select any attributes in your Maya scene as inputs and targets.
    • The tool trains a model to predict the target attributes from the input attributes.
  2. Random or Current Animation Data:
    • Use existing animation curves in your Maya scene.
    • Generate random animation data to expand your dataset.
  3. Elastic Net Regularization:
    • Combines L1 and L2 penalties during training.
    • Parameters such as alpha, L1 ratio, and tolerance are configurable.
  4. Normalization:
    • Automatically normalizes input and output data.
    • Stores the mean and standard deviation in the custom node for consistent runtime normalization.
  5. Custom Node Creation:
    • Generates a lightweight, custom linear regression node.
    • The trained weights, bias, and normalization parameters are set in the node automatically.

What is Linear Regression?

Linear Regression is a simple and widely used machine learning algorithm for modeling the relationship between a dependent variable (target) and one or more independent variables (inputs). It works by fitting a linear equation to the observed data:

$y = w_1x_1 + w_2x_2 + \dots + w_nx_n + b$

Here:

  • $y$ : Predicted output (target attribute).
  • $x_1, x_2, \dots, x_n$ : Input attributes.
  • $w_1, w_2, \dots, w_n$ : Weights (learned parameters).
  • $b$ : Bias term (intercept).

The algorithm minimizes the error between predicted and actual values, usually measured using Mean Squared Error (MSE).
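In code, the prediction is just a weighted sum of the inputs plus the bias. The weights, bias, and input values below are made-up numbers for illustration only:

```python
# Minimal illustration of y = w1*x1 + ... + wn*xn + b.
# Weights, bias, and inputs are placeholder example values.
weights = [0.8, -0.25]      # w1, w2 (learned during training)
bias = 1.5                  # b (intercept)
inputs = [2.0, 4.0]         # x1, x2 (input attribute values)

# Weighted sum of the inputs plus the bias gives the predicted target value.
y = sum(w * x for w, x in zip(weights, inputs)) + bias
print(y)  # 0.8*2.0 + (-0.25)*4.0 + 1.5 = 2.1
```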


What is L1 and L2 Regularization?

Regularization is a technique used to prevent overfitting by adding a penalty to the model's complexity. Elastic Net combines two types of regularization:

L1 Regularization (Lasso):

L1 adds the absolute value of weights as a penalty term to the loss function:

$\text{Loss} = \text{MSE} + \alpha \sum |w|$

  • Encourages sparsity by setting some weights to zero.
  • Useful for feature selection.

L2 Regularization (Ridge):

L2 adds the square of the weights as a penalty term to the loss function:

$\text{Loss} = \text{MSE} + \alpha \sum w^2$

  • Encourages smaller weights but does not eliminate them.
  • Helps distribute the influence among features.

Elastic Net:

Elastic Net combines L1 and L2 regularization:

$\text{Loss} = \text{MSE} + \alpha \left( \text{L1 ratio} \times \sum |w| + (1 - \text{L1 ratio}) \times \sum w^2 \right)$

  • Balances feature selection (L1) and weight distribution (L2).
  • Controlled by the L1 Ratio parameter.

By using Elastic Net, this tool achieves a balance between simplicity (sparse weights) and stability (reduced overfitting).
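To make the combined objective concrete, here is a short sketch that evaluates the Elastic Net loss from the formula above for a given set of weights. The data and parameter values are placeholders, not the tool's actual implementation:

```python
import numpy as np

def elastic_net_loss(X, y, w, b, alpha=0.01, l1_ratio=0.5):
    """MSE + alpha * (l1_ratio * sum|w| + (1 - l1_ratio) * sum(w^2))."""
    predictions = X @ w + b
    mse = np.mean((y - predictions) ** 2)
    l1_penalty = l1_ratio * np.sum(np.abs(w))
    l2_penalty = (1.0 - l1_ratio) * np.sum(w ** 2)
    return mse + alpha * (l1_penalty + l2_penalty)

# Toy data: 4 samples, 2 input features (placeholder values).
X = np.array([[0.0, 1.0], [1.0, 0.5], [2.0, -1.0], [3.0, 2.0]])
y = np.array([0.5, 1.0, 1.5, 3.0])
w = np.array([0.9, 0.1])
print(elastic_net_loss(X, y, w, b=0.2))
```

Larger alpha values penalize the weights more strongly, while the L1 ratio shifts the penalty between the sparsity-inducing L1 term and the smoothing L2 term.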


Data Normalization

What is Normalization?

Normalization scales input data to have a mean of 0 and a standard deviation of 1. This process ensures that all features contribute equally to the training process, regardless of their original scale. The formula for normalization is:

$x_{\text{normalized}} = \frac{x - \mu}{\sigma}$

Here:

  • $x$ : Original data value.
  • $\mu$: Mean of the data.
  • $\sigma$: Standard deviation of the data.

For denormalizing the output, the process is reversed:

$x_{\text{original}} = x_{\text{normalized}} \cdot \sigma + \mu$
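A minimal sketch of the two formulas, assuming the statistics are computed per attribute with NumPy (the tool itself stores these values on the custom node):

```python
import numpy as np

def normalize(values, mean, std):
    """Scale data to zero mean and unit standard deviation."""
    return (values - mean) / std

def denormalize(values, mean, std):
    """Reverse the scaling to recover values in the original range."""
    return values * std + mean

data = np.array([10.0, 12.0, 14.0, 18.0])   # placeholder attribute samples
mu, sigma = data.mean(), data.std()

normalized = normalize(data, mu, sigma)
restored = denormalize(normalized, mu, sigma)
print(np.allclose(restored, data))  # True: the round trip recovers the data
```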

Why Normalize?

  1. Improved Convergence: Many optimization algorithms, like gradient descent, converge faster when features are on similar scales.
  2. Equal Contribution: Prevents features with larger scales from dominating the training process.
  3. Numerical Stability: Reduces the risk of numerical issues caused by very large or small feature values.

By storing the mean and standard deviation in the custom node, the tool ensures that data is consistently normalized during inference, leading to accurate predictions.


Installation

  1. Clone or download this repository.
  2. Copy the tool files to your Maya scripts directory.
  3. Open Maya and run the tool's main script to launch the UI.
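The exact entry point depends on how the files are named in the repository. As an example, assuming the main script exposes a module called linear_regression_tool with a show() function (hypothetical names), you could launch it from Maya's Script Editor like this:

```python
# Run from Maya's Script Editor (Python tab).
# Module and function names are assumptions; use the names from the repository.
import linear_regression_tool
linear_regression_tool.show()  # opens the tool's UI
```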

User Interface (UI)

Input Attributes

  • Add attributes from your scene to serve as inputs.
  • Multiple attributes can be selected.

Target Attributes

  • Define the attributes you want to predict using the model.

Animation Data

  • Use Current Animation: Utilize existing animation curves in the scene.
  • Generate Random Animation: Automatically create randomized animation data.
    • Specify the number of frames and value range.

Training Parameters

  • Alpha: Controls the strength of regularization.
  • L1 Ratio: Determines the balance between L1 and L2 penalties.
  • Tolerance: Sets the convergence threshold.
  • Max Iterations: Specifies the maximum number of iterations for training.

Debugging Options

  • Enable output node duplication for debugging purposes.

Train Button

  • Start the training process.
  • Progress and results are displayed in the output window.

Workflow

  1. Launch the tool and open the UI.
  2. Select the input and target attributes.
  3. Choose whether to use current animation data or generate random data.
  4. Set the training parameters (optional).
  5. Click "Train" to train the model.
  6. A custom linear regression node is created and linked to your scene.

Node Details

The generated node contains:

  • Input connections: The input attributes are connected as sources to the node's feature inputs.
  • Output connections: The node's output values are connected to the target attributes.
  • Weights and Bias: Stored learned parameters from the trained model.
  • Normalization Parameters: Mean and standard deviation for input normalization and output denormalization.
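The connections listed above follow the standard Maya pattern of wiring attributes into and out of a node. The snippet below is only a sketch with a hypothetical node type and attribute names (linearRegression, input[i], output[i]); the tool creates and connects the real node for you after training.

```python
import maya.cmds as cmds

# Hypothetical node type and attribute names, for illustration only.
node = cmds.createNode('linearRegression', name='linearRegression1')

# Inputs: scene attributes drive the node's feature inputs.
cmds.connectAttr('pSphere1.translateY', node + '.input[0]')
cmds.connectAttr('pSphere1.translateZ', node + '.input[1]')

# Output: the node's prediction drives the target attribute.
cmds.connectAttr(node + '.output[0]', 'locator1.translateZ')
```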

Examples

Example 1: Predict Locator Position

  1. Select pSphere1.translateY and pSphere1.translateZ as input attributes.
  2. Select locator1.translateZ as the target attribute.
  3. Train the model and observe how the locator's Z-translation is predicted based on the sphere's translations.

Example 2: Randomized Training

  1. Generate random animation data with a range of -50 to 50.
  2. Train the model and apply it to custom attributes.
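If you want to prepare similar random animation data by hand, a minimal sketch using maya.cmds might look like the following; the attribute names and frame count are placeholders, and the tool's Generate Random Animation option does this for you.

```python
import random
import maya.cmds as cmds

# Key a random value in [-50, 50] on each frame for a few example attributes.
attributes = [('pSphere1', 'translateY'), ('pSphere1', 'translateZ')]
for frame in range(1, 101):
    for node, attr in attributes:
        cmds.setKeyframe(node, attribute=attr, time=frame,
                         value=random.uniform(-50.0, 50.0))
```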

Troubleshooting

  • Model Not Converging: Increase Max Iterations or adjust Tolerance.
  • Overfitting: Increase the Alpha value or adjust the L1 Ratio for stronger regularization.

License

This tool is open-source and available under the MIT License. Contributions are welcome!


Contact

If you encounter issues or have suggestions, feel free to create an issue on GitHub or contact me directly on Bluesky.