Implementation of training on per-atom target quantities #101
Hello,
This PR is a first attempt at implementing training of the models (SOAP-BPNN and alchemical) on per-atom targets for extensive quantities (e.g. energy), rather than on the raw extensive values. Apart from personal preferences in model training, this is useful because it keeps models in `metatensor-models` consistent with the default training behavior of MACE, NequIP, and other packages.

In the proposed solution, the changes are summarized as follows:
- `peratom_targets`, a list of strings naming the targets that should be trained on per-atom values, is defined and accepted as an input to the `compute_model_loss` function of `compute_loss.py`. The list can be supplied in the input yaml file (see the yaml sketch after this list).
- In the `compute_model_loss` function, model predictions and target values for these targets are divided by the number of atoms before they are passed to `loss` and then to `MSELoss` (see the Python sketch after this list).

I understand this may not be the most optimal solution. All suggestions on refactoring the feature are welcome.
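For concreteness, this is roughly how the list could be supplied in the options file. Only the `peratom_targets` key itself comes from this PR; the surrounding keys are placeholders and not the actual metatensor-models options schema:

```yaml
# hypothetical options.yaml fragment; the surrounding keys are placeholders
training:
  peratom_targets:
    - energy   # train on energy per atom rather than on the total energy
```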
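And here is a minimal, self-contained sketch of the normalization step described in the second point. The function name mirrors `compute_model_loss`, but the signature and the plain-tensor inputs are simplifications (the actual code operates on metatensor data structures), so this only illustrates the logic:

```python
# Standalone sketch of the per-atom normalization; names and signature are
# illustrative, not the actual metatensor-models API.
from typing import Dict, List

import torch


def compute_model_loss_sketch(
    predictions: Dict[str, torch.Tensor],  # target name -> predicted per-structure values
    targets: Dict[str, torch.Tensor],      # target name -> reference values
    n_atoms: torch.Tensor,                 # number of atoms in each structure
    peratom_targets: List[str],            # targets to train on per-atom values
) -> torch.Tensor:
    loss_fn = torch.nn.MSELoss()
    total_loss = torch.zeros(())
    for name in predictions:
        pred = predictions[name]
        ref = targets[name]
        if name in peratom_targets:
            # divide extensive quantities (e.g. energies) by the number of
            # atoms, so that the loss is computed on per-atom values
            pred = pred / n_atoms
            ref = ref / n_atoms
        total_loss = total_loss + loss_fn(pred, ref)
    return total_loss
```

Dividing both the predictions and the reference values keeps the loss in per-atom units, which is what makes the behavior consistent with the MACE/NequIP defaults mentioned above.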
Resolves #95
📚 Documentation preview 📚: https://metatensor-models--101.org.readthedocs.build/en/101/