
DeepTensor 🔥


  • DeepTensor: A minimal PyTorch-like deep learning library focused on custom autograd and efficient tensor operations.

Features at a Glance 🚀

  • Automatic gradient computation with a custom autograd engine (see the sketch just after this list).
  • Weight initialization schemes:
    • Xavier/Glorot and He initialization in both uniform and normal variants.
  • Activation functions:
    • ReLU, GeLU, Sigmoid, Tanh, SoftMax, LeakyReLU, and more.
  • Built-in loss functions:
    • Mean Squared Error (MSE), Cross Entropy, and Binary Cross Entropy.
  • Optimizers:
    • SGD, Momentum, AdaGrad, RMSprop, and Adam.
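
The autograd engine is easiest to see on bare Value objects. Here is a minimal sketch: Value(...) and backward() appear in the Basic Usage section below, while the arithmetic operator overloads and the grad attribute are assumptions in the style of micrograd-like engines.

from deeptensor import Value

a = Value(2.0)
b = Value(3.0)
c = a * b + a  # c = 8.0; assumes Value overloads * and +

c.backward()   # reverse-mode autodiff, populating gradients

# expected: dc/da = b + 1 = 4.0, dc/db = a = 2.0
print(a.grad, b.grad)  # .grad is an assumed attribute name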

Why DeepTensor?

DeepTensor offers a hands-on implementation of deep learning fundamentals, with a focus on customizability and on understanding the internals of frameworks like PyTorch.


Installation

pip install deeptensor
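
To confirm the install worked, a quick sanity check: the Tensor and Value calls mirror the Basic Usage section below, and the printability of Tensor is an assumption.

from deeptensor import Tensor, Value

t = Tensor([2])        # 1-D tensor with two elements
t.set(0, Value(1.0))
t.set(1, Value(2.0))
print(t)               # assumes Tensor has a printable representation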

Set up the project for development

git clone --recurse-submodules -j8 git@github.com:deependujha/DeepTensor.git
cd DeepTensor

# run the C++ tests (CTest)
make ctest

# install the Python package in editable mode
pip install -e .

# run the Python tests (pytest)
make test

Check Out the Demo

(demo link in the repository)


Check the Docs

(loss curve plot from the docs)


Basic Usage

from deeptensor import (
    # model
    Model,

    # layers
    Conv2D,
    MaxPooling2D,
    Flatten,
    LinearLayer,

    # activation layers
    GeLu,
    LeakyReLu,
    ReLu,
    Sigmoid,
    SoftMax,
    Tanh,

    # core objects
    Tensor,
    Value,

    # optimizers
    SGD,
    Momentum,
    AdaGrad,
    RMSprop,
    Adam,

    # losses
    mean_squared_error,
    cross_entropy,
    binary_cross_entropy,
)

model = Model(
    [
        LinearLayer(2, 16),
        ReLu(),
        LinearLayer(16, 16),
        LeakyReLu(0.1),
        LinearLayer(16, 1),
        Sigmoid(),
    ],
    False,  # using_cuda
)

opt = Adam(model, 0.01)  # optimizer with learning rate 0.01

print(model)

tensor_input = Tensor([2])
tensor_input.set(0, Value(2.4))
tensor_input.set(1, Value(5.2))

out = model(tensor_input)

# YOUR_EXPECTED_OUTPUT is a placeholder: a target Tensor shaped like `out`
loss = mean_squared_error(out, YOUR_EXPECTED_OUTPUT)

# backprop
loss.backward()
opt.step()
opt.zero_grad()
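
Putting the pieces together, here is a minimal training-loop sketch built from the calls shown above. The toy dataset and epoch count are illustrative assumptions, as is passing the target as a Tensor of the same shape as the model output.

# toy dataset of (inputs, target) pairs: illustrative only
data = [
    ((2.4, 5.2), 1.0),
    ((0.3, 0.1), 0.0),
]

for epoch in range(100):
    for (x0, x1), y in data:
        x = Tensor([2])
        x.set(0, Value(x0))
        x.set(1, Value(x1))

        target = Tensor([1])   # matches the final LinearLayer(16, 1) output
        target.set(0, Value(y))

        out = model(x)
        loss = mean_squared_error(out, target)

        loss.backward()        # accumulate gradients through the graph
        opt.step()             # apply the Adam update
        opt.zero_grad()        # clear gradients for the next step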

Planned Features

  • Save & Load model
  • Train a character-level transformer model
  • Add support for DDP
  • Add support for CUDA execution ⭐️

Open to Opportunities 🎅🏻🎁

I am actively seeking new opportunities to contribute to impactful projects in the deep learning and AI space.

If you are interested in collaborating or have a position that aligns with my expertise, feel free to reach out!

You can connect with me on GitHub, X (formerly Twitter), or email me: [email protected].