
0.1.2 #15

Merged 23 commits on May 29, 2024

Commits
6076416
add readthedocs badge
danibene Mar 15, 2024
2f8394a
add docs and pypi badges
danibene Mar 15, 2024
92bf196
have badge link to pypi
danibene Mar 15, 2024
e22fcc0
try testing with python 3.11 and all OSs
danibene Mar 15, 2024
77b2c26
only test all OSs
danibene Mar 15, 2024
3dfaad5
update version in python requires + tests
danibene Mar 15, 2024
792bc16
Update README.md
sibasmarak Mar 18, 2024
552146b
update newest python version to 3.12
danibene Mar 19, 2024
4c5e712
correct segmentation eval; transfer learning with pretrained canonica…
sibasmarak Apr 5, 2024
4b56bb8
Merge branch 'dev' of https://github.com/arnab39/EquivariantAdaptatio…
sibasmarak Apr 5, 2024
68be8c4
run "pre-commit run --all-files"
danibene Apr 11, 2024
59104ef
Merge branch 'main' into dev
danibene Apr 11, 2024
4d6f8f3
update contributor's guide with info on code checks
danibene Apr 11, 2024
bf53739
add PR template inspired by https://github.com/neuropsychology/NeuroK…
danibene Apr 11, 2024
5583bc7
update changelog with changes from this PR
danibene Apr 11, 2024
99c533f
add changes from https://github.com/arnab39/equiadapt/commit/f39eb87e…
danibene Apr 11, 2024
f3bec57
Merge pull request #19 from arnab39/add/pr_template
sibasmarak May 27, 2024
8bf1da3
Added EquiOptAdapt paper details
sibasmarak May 27, 2024
2d355ac
change os tested for python 3.7
danibene May 28, 2024
00b0e86
update changelog
danibene May 28, 2024
102f73f
use variable names from previous version of ci file
danibene May 28, 2024
b081106
Merge pull request #21 from arnab39/change/os_ci_test
sibasmarak May 28, 2024
2b0e434
Updated CHANGELOG.md
sibasmarak May 29, 2024
20 changes: 20 additions & 0 deletions .github/pull_request_template.md
@@ -0,0 +1,20 @@
This is a template for making a pull request. You can remove the text and sections and write your own if you wish; just make sure you give enough information about how and why. If you have any issues or difficulties, don't hesitate to open an issue.


# Description

The aim is to add this feature ...

# Proposed Changes

I changed the `foo()` function so that ...


# Checklist

Here are some things to check before creating the pull request. If you encounter any issues, don't hesitate to ask for help :)

- [ ] I have read the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).
- [ ] The base branch of my pull request is the `dev` branch, not the `main` branch.
- [ ] I ran the [code checks](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md#implement-your-changes) on the files I added or modified and fixed the errors.
- [ ] I updated the [changelog](https://github.com/arnab39/equiadapt/blob/main/CHANGELOG.md).
16 changes: 9 additions & 7 deletions .github/workflows/ci.yml
@@ -58,14 +58,16 @@ jobs:
test:
needs: prepare
strategy:
fail-fast: false
matrix:
python:
- "3.7" # oldest Python supported by PSF
- "3.10" # newest Python that is stable
platform:
- ubuntu-latest
# - macos-latest
# - windows-latest
python: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
platform: [ubuntu-latest, macos-latest, windows-latest]
exclude: # Python < v3.8 does not support Apple Silicon ARM64.
- python: "3.7"
platform: macos-latest
include: # So run those legacy versions on Intel CPUs.
- python: "3.7"
platform: macos-13
runs-on: ${{ matrix.platform }}
steps:
- uses: actions/checkout@v3
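
To make the effect of the new exclude/include rules concrete, the sketch below (not part of the repository) expands the matrix the same way GitHub Actions does: build the full python x platform product, drop the combinations matched by `exclude`, then append the `include` entries.

```
from itertools import product

pythons = ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
platforms = ["ubuntu-latest", "macos-latest", "windows-latest"]

# Full matrix product: 18 combinations.
jobs = [{"python": py, "platform": pf} for py, pf in product(pythons, platforms)]

# `exclude`: Python 3.7 cannot run on the ARM64 macos-latest runners.
jobs = [j for j in jobs if j != {"python": "3.7", "platform": "macos-latest"}]

# `include`: run that legacy version on the Intel-based macos-13 runner instead.
jobs.append({"python": "3.7", "platform": "macos-13"})

print(len(jobs))  # 18 jobs: 17 remaining from the product, plus the macos-13 entry
```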
6 changes: 3 additions & 3 deletions .pre-commit-config.yaml
@@ -2,7 +2,7 @@ exclude: '^docs/conf.py'

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
rev: v4.6.0
hooks:
- id: trailing-whitespace
- id: check-added-large-files
@@ -40,7 +40,7 @@ repos:
- id: isort

- repo: https://github.com/psf/black
rev: 24.2.0
rev: 24.4.2
hooks:
- id: black
language_version: python3
@@ -66,7 +66,7 @@ repos:

# Check for type errors with mypy:
- repo: https://github.com/pre-commit/mirrors-mypy
rev: 'v1.9.0'
rev: 'v1.10.0'
hooks:
- id: mypy
args: [--disallow-untyped-defs, --ignore-missing-imports]
13 changes: 10 additions & 3 deletions CHANGELOG.md
@@ -5,15 +5,22 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]
## [0.1.2] - 2024-05-29

### Added
- Added canonicalization with optimization approach.
- Added evaluating transfer learning capabilities of canonicalizer.
- Added pull request template.
- Added test for discrete invert canonicalization.

### Fixed
- Fixed segmentation evaluation for non-identity canonicalizers.
- Fixed minor bugs in inverse canonicalization for discrete groups.

### Changed
- Updated `README.md` with [Improved Canonicalization for Model Agnostic Equivariance](https://arxiv.org/abs/2405.14089) ([EquiVision](https://equivision.github.io/), CVPR 2024 workshop) paper details.
- Updated `CONTRIBUTING.md` with more information on how to run the code checks.
- Changed the OS used to test Python 3.7 on GitHub actions (macos-latest -> macos-13).

## [0.1.1] - 2024-03-15

26 changes: 19 additions & 7 deletions CONTRIBUTING.md
@@ -155,17 +155,29 @@ This can easily be done via [Anaconda] or [Miniconda] and detailed [here](https:
`git log --graph --decorate --pretty=oneline --abbrev-commit --all`
to look for recurring communication patterns.

#### Run code checks

Please make sure to see the validation messages from pre-commit and fix any
eventual issues. This should automatically use [flake8]/[black] to check/fix
the code style in a way that is compatible with the project.

To run pre-commit manually, you can use:

```
pre-commit run --all-files
```

Please also check that your changes don't break any unit tests with:

```
tox
```

(after having installed [tox] with `pip install tox` or `pipx`).

You can also use [tox] to run several other pre-configured tasks in the
repository. Try `tox -av` to see a list of the available checks.

### Submit your contribution

17 changes: 15 additions & 2 deletions README.md
@@ -121,6 +121,8 @@ You can clone this repository and manually install it with:

## Setup Conda environment for examples

The recommended way is to manually create an environment and install the dependencies from the `min_conda_env.yaml` file.

To create a conda environment with the necessary packages:

```
@@ -168,7 +170,7 @@ You can also find [tutorials](https://github.com/arnab39/equiadapt/blob/main/tut

# Related papers and Citations

For more insights on this library, refer to our original paper on the idea: [Equivariance with Learned Canonicalization Function (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html) and how to extend it to make any existing large pre-trained model equivariant: [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html). An improved approach to designing the canonicalization network, which allows non-equivariant and expressive models to be used as canonicalization networks, is presented in [Improved Canonicalization for Model Agnostic Equivariance (CVPR 2024: EquiVision Workshop)](https://arxiv.org/abs/2405.14089).


If you find this library or the associated papers useful, please cite the following papers:
@@ -197,6 +199,17 @@ If you find this library or the associated papers useful, please cite the follow
}
```

```
@inproceedings{
panigrahi2024improved,
title={Improved Canonicalization for Model Agnostic Equivariance},
author={Siba Smarak Panigrahi and Arnab Kumar Mondal},
booktitle={CVPR 2024 Workshop on Equivariant Vision: From Theory to Practice},
year={2024},
url={https://arxiv.org/abs/2405.14089}
}
```

# Contact

For questions related to this code, please raise an issue and you can mail us at:
@@ -206,7 +219,7 @@ For questions related to this code, please raise an issue and you can mail us at

# Contributing

You can check out the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CHANGELOG.md).
You can check out the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).

This project uses `pre-commit`, you can install it before making any
changes::
4 changes: 4 additions & 0 deletions equiadapt/images/__init__.py
@@ -22,6 +22,8 @@
RotationEquivariantConvLift,
RotoReflectionEquivariantConv,
RotoReflectionEquivariantConvLift,
WideResNet50Network,
WideResNet101Network,
custom_equivariant_networks,
custom_group_equivariant_layers,
custom_nonequivariant_networks,
@@ -51,6 +53,8 @@
"OptimizedGroupEquivariantImageCanonicalization",
"OptimizedSteerableImageCanonicalization",
"ResNet18Network",
"WideResNet50Network",
"WideResNet101Network",
"RotationEquivariantConv",
"RotationEquivariantConvLift",
"RotoReflectionEquivariantConv",
4 changes: 4 additions & 0 deletions equiadapt/images/canonicalization_networks/__init__.py
@@ -16,6 +16,8 @@
from equiadapt.images.canonicalization_networks.custom_nonequivariant_networks import (
ConvNetwork,
ResNet18Network,
WideResNet50Network,
WideResNet101Network,
)
from equiadapt.images.canonicalization_networks.escnn_networks import (
ESCNNEquivariantNetwork,
@@ -34,6 +36,8 @@
"ESCNNWideBasic",
"ESCNNWideBottleneck",
"ResNet18Network",
"WideResNet101Network",
"WideResNet50Network",
"RotationEquivariantConv",
"RotationEquivariantConvLift",
"RotoReflectionEquivariantConv",
@@ -110,7 +110,7 @@ def __init__(
out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
"""
super().__init__()
self.resnet18 = torchvision.models.resnet18(weights=None)
self.resnet18 = torchvision.models.resnet18(weights="DEFAULT")
self.resnet18.fc = nn.Sequential(
nn.Linear(512, out_vector_size),
)
@@ -128,3 +128,103 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
torch.Tensor: The output of the network. It has the shape (batch_size, 1).
"""
return self.resnet18(x)


class WideResNet101Network(nn.Module):
"""
This class represents a neural network based on the WideResNet architecture.

The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet101 model is replaced with a new fully connected layer.

Attributes:
wideresnet (torchvision.models.ResNet): The WideResNet-101 model.
out_vector_size (int): The size of the output vector of the network.
"""

def __init__(
self,
in_shape: tuple,
out_channels: int,
kernel_size: int,
num_layers: int = 2,
out_vector_size: int = 128,
):
"""
Initializes the WideResNet101Network instance.

Args:
in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
out_channels (int): The number of output channels of the first convolutional layer.
kernel_size (int): The size of the kernel of the convolutional layers.
num_layers (int, optional): The number of convolutional layers. Defaults to 2.
out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
"""
super().__init__()
self.wideresnet = torchvision.models.wide_resnet101_2(weights="DEFAULT")
self.wideresnet.fc = nn.Sequential(
nn.Linear(2048, out_vector_size),
)

self.out_vector_size = out_vector_size

def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
Performs a forward pass through the network.

Args:
x (torch.Tensor): The input data. It should have the shape (batch_size, in_channels, height, width).

Returns:
torch.Tensor: The output of the network. It has the shape (batch_size, out_vector_size).
"""
return self.wideresnet(x)


class WideResNet50Network(nn.Module):
"""
This class represents a neural network based on the WideResNet architecture.

The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet50 model is replaced with a new fully connected layer.

Attributes:
wideresnet (torchvision.models.ResNet): The WideResNet-50 model.
out_vector_size (int): The size of the output vector of the network.
"""

def __init__(
self,
in_shape: tuple,
out_channels: int,
kernel_size: int,
num_layers: int = 2,
out_vector_size: int = 128,
):
"""
Initializes the WideResNet50Network instance.

Args:
in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
out_channels (int): The number of output channels of the first convolutional layer.
kernel_size (int): The size of the kernel of the convolutional layers.
num_layers (int, optional): The number of convolutional layers. Defaults to 2.
out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
"""
super().__init__()
self.wideresnet = torchvision.models.wide_resnet50_2(weights="DEFAULT")
self.wideresnet.fc = nn.Sequential(
nn.Linear(2048, out_vector_size),
)

self.out_vector_size = out_vector_size

def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
Performs a forward pass through the network.

Args:
x (torch.Tensor): The input data. It should have the shape (batch_size, in_channels, height, width).

Returns:
torch.Tensor: The output of the network. It has the shape (batch_size, out_vector_size).
"""
return self.wideresnet(x)
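
For orientation, here is a minimal usage sketch for the new backbones added above. The constructor keywords mirror the signature in the diff; the choice of `WideResNet50Network`, the input shape, and the batch size are illustrative assumptions rather than values taken from the repository's configs.

```
import torch

from equiadapt.images.canonicalization_networks import WideResNet50Network

# Instantiate with the constructor arguments shown in the diff. Note that
# weights="DEFAULT" inside the class downloads pretrained ImageNet weights.
network = WideResNet50Network(
    in_shape=(3, 224, 224),
    out_channels=16,
    kernel_size=7,
    num_layers=2,
    out_vector_size=128,
)

x = torch.randn(4, 3, 224, 224)  # assumed batch of 4 RGB images
out = network(x)
print(out.shape)  # torch.Size([4, 128]): one out_vector_size-dim vector per image
```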
@@ -1,5 +1,5 @@
canonicalization_type: opt_group_equivariant
network_type: cnn # Options for canonization method 1) cnn 2) wideresnet
network_type: cnn # Options for canonization method 1) cnn 2) non_equivariant_wrn_50 3) non_equivariant_wrn_101 4) non_equivariant_resnet18
network_hyperparams:
kernel_size: 7 # Kernel size for the canonization network
out_channels: 16 # Number of output channels for the canonization network
@@ -2,3 +2,5 @@ checkpoint_path: ${oc.env:CHECKPOINT_PATH} # Path to save checkpoints
checkpoint_name: "" # Model checkpoint name, should be left empty for training and dynamically allocated later
save_canonized_images: 0 # Whether to save canonized images (1) or not (0)
strict_loading: 1 # Whether to strictly load the model (1) or not (0)
prediction_network_checkpoint_path: null # Path to load prediction network checkpoints
prediction_network_checkpoint_name: null # Name of the prediction network checkpoint file
8 changes: 6 additions & 2 deletions examples/images/classification/inference_utils.py
@@ -61,7 +61,9 @@ def get_inference_metrics(self, x: torch.Tensor, y: torch.Tensor):
]

# check if the accuracy per class is nan
acc_per_class = [0.0 if math.isnan(acc) else acc for acc in acc_per_class]
acc_per_class = [
torch.tensor(0.0) if math.isnan(acc) else acc for acc in acc_per_class
]

# Update metrics with accuracy per class
metrics.update(
@@ -151,7 +153,9 @@ def get_inference_metrics(self, x: torch.Tensor, y: torch.Tensor):
]

# check if the accuracy per class is nan
acc_per_class = [0.0 if math.isnan(acc) else acc for acc in acc_per_class]
acc_per_class = [
torch.tensor(0.0) if math.isnan(acc) else acc for acc in acc_per_class
]

# Update metrics with accuracy per class
metrics.update(
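The `inference_utils.py` change above swaps the plain `0.0` fallback for `torch.tensor(0.0)`. A minimal sketch of the pattern; the sample values, and the assumption that NaNs come from classes absent in the evaluated data, are illustrative:

```
import math

import torch

# Assumed per-class accuracies; the middle class yields NaN (e.g. no samples).
acc_per_class = [torch.tensor(0.90), torch.tensor(float("nan")), torch.tensor(0.75)]

# The guard from the diff: replace NaN entries with a 0.0 *tensor* rather than a
# Python float, so the list stays homogeneous and can still be stacked or logged.
acc_per_class = [
    torch.tensor(0.0) if math.isnan(acc) else acc for acc in acc_per_class
]

print(torch.stack(acc_per_class))  # tensor([0.9000, 0.0000, 0.7500])
```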
3 changes: 1 addition & 2 deletions examples/images/classification/model.py
@@ -64,7 +64,7 @@ def training_step(self, batch: torch.Tensor):
assert (num_channels, height, width) == self.image_shape

training_metrics = {}
loss = 0.0
loss, acc = 0.0, 0.0

# canonicalize the input data
# For the vanilla model, the canonicalization is the identity transformation
@@ -101,7 +101,6 @@
acc = (preds == y).float().mean()

training_metrics.update({"train/task_loss": task_loss, "train/acc": acc})
training_metrics.update({"train/task_loss": task_loss, "train/acc": acc})

# Add prior regularization loss if the prior weight is non-zero
if self.hyperparams.experiment.training.loss.prior_weight:
8 changes: 4 additions & 4 deletions examples/images/classification/train.py
@@ -74,13 +74,13 @@ def train_images(hyperparams: DictConfig) -> None:

if not hyperparams["experiment"]["run_mode"] == "test":
hyperparams["checkpoint"]["checkpoint_name"] = (
wandb_run.id
str(wandb_run.id)
+ "_"
+ wandb_run.name
+ str(wandb_run.name)
+ "_"
+ wandb_run.sweep_id
+ str(wandb_run.sweep_id)
+ "_"
+ wandb_run.group
+ str(wandb_run.group)
)

# set seed
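
For context on the `str()` wrapping above: W&B run attributes such as `wandb_run.sweep_id` (and possibly `group`) can be `None` when the run is not part of a sweep, and concatenating `None` with a string raises a `TypeError`, whereas `str(None)` degrades to the string `"None"`. A small sketch with assumed stand-in values:

```
# Assumed stand-ins for the wandb_run attributes used in train.py.
run_id, run_name, sweep_id, group = "abc123", "brisk-sun-7", None, None

# Without str(): TypeError, because None cannot be concatenated with a str.
# checkpoint_name = run_id + "_" + run_name + "_" + sweep_id + "_" + group

# With str(): every component is coerced to a string first.
checkpoint_name = (
    str(run_id) + "_" + str(run_name) + "_" + str(sweep_id) + "_" + str(group)
)
print(checkpoint_name)  # abc123_brisk-sun-7_None_None
```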