DeepTensor: A minimal PyTorch-like deep learning library focused on custom autograd and efficient tensor operations.
- Automatic gradient computation with a custom autograd engine (see the sketch after this list).
- Weight initialization schemes: Xavier/Glorot and He initialization in both uniform and normal variants.
- Activation functions: ReLU, GeLU, Sigmoid, Tanh, SoftMax, LeakyReLU, and more.
- Built-in loss functions: Mean Squared Error (MSE), Cross Entropy, and Binary Cross Entropy.
- Optimizers: SGD, Momentum, AdaGrad, RMSprop, and Adam.
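The sketch below illustrates the idea behind the autograd engine. Only the `Value` class and `backward()` are confirmed by the usage example further down; that `Value` overloads Python arithmetic operators and exposes a `grad` attribute is an assumption made here for illustration, not the library's documented API.

```python
from deeptensor import Value

# Minimal autograd sketch (assumed API: operator overloading and a .grad attribute).
a = Value(2.0)
b = Value(-3.0)
c = a * b + a    # builds a small computation graph: c = a*b + a
c.backward()     # reverse-mode autodiff populates gradients

print(a.grad)    # dc/da = b + 1 = -2.0, if .grad is exposed as assumed
print(b.grad)    # dc/db = a     =  2.0
```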
DeepTensor offers a hands-on implementation of deep learning fundamentals with a focus on customizability and learning the internals of deep learning frameworks like PyTorch.
Install from PyPI:

```bash
pip install deeptensor
```
To work on the library itself, clone the repository (with submodules) and run the test suites:

```bash
git clone --recurse-submodules -j8 git@github.com:deependujha/DeepTensor.git
cd DeepTensor

# run ctests
make ctest

# install python package in editable mode
pip install -e .

# run pytest
make test
```
Example usage:

```python
from deeptensor import (
    # model
    Model,
    # layers
    Conv2D,
    MaxPooling2D,
    Flatten,
    LinearLayer,
    # activation layers
    GeLu,
    LeakyReLu,
    ReLu,
    Sigmoid,
    SoftMax,
    Tanh,
    # core objects
    Tensor,
    Value,
    # optimizers
    SGD,
    Momentum,
    AdaGrad,
    RMSprop,
    Adam,
    # losses
    mean_squared_error,
    cross_entropy,
    binary_cross_entropy,
)
```
```python
model = Model(
    [
        LinearLayer(2, 16),
        ReLu(),
        LinearLayer(16, 16),
        LeakyReLu(0.1),
        LinearLayer(16, 1),
        Sigmoid(),
    ],
    False,  # using_cuda
)

opt = Adam(model, 0.01)  # learning rate
print(model)

tensor_input = Tensor([2])
tensor_input.set(0, Value(2.4))
tensor_input.set(1, Value(5.2))

out = model(tensor_input)
loss = mean_squared_error(out, YOUR_EXPECTED_OUTPUT)

# backprop
loss.backward()
opt.step()
opt.zero_grad()
```
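Building on the snippet above, here is a minimal sketch of a full training loop. The toy dataset, the `to_tensor` helper, and the epoch count are illustrative additions, not part of the library; the sketch also assumes the second argument of `mean_squared_error` is a `Tensor` with the same shape as the model output. Any of the imported optimizers (SGD, Momentum, AdaGrad, RMSprop) could stand in for Adam here, assuming they share the `(model, learning_rate)` constructor shown above.

```python
# Hypothetical helper: pack a list of floats into a deeptensor Tensor,
# using the same Tensor/Value/set calls as the example above.
def to_tensor(values):
    t = Tensor([len(values)])
    for i, v in enumerate(values):
        t.set(i, Value(v))
    return t

# Toy (input, target) pairs — illustrative data only.
dataset = [
    ([2.4, 5.2], [1.0]),
    ([0.3, 1.1], [0.0]),
]

for epoch in range(100):
    for x, y in dataset:
        out = model(to_tensor(x))
        loss = mean_squared_error(out, to_tensor(y))  # assumes a Tensor target
        loss.backward()   # backprop
        opt.step()        # update parameters
        opt.zero_grad()   # clear gradients for the next step
```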
- Save & Load model
- Train a character-level transformer model
- Add support for DDP
- Add support for CUDA execution ⭐️
I am actively seeking new opportunities to contribute to impactful projects in the deep learning and AI space.
If you are interested in collaborating or have a position that aligns with my expertise, feel free to reach out!
You can connect with me on GitHub, X (formerly Twitter), or email me: [email protected].