This repository has been archived by the owner on Oct 2, 2024. It is now read-only.

README.md

File metadata and controls

20 lines (12 loc) · 1.45 KB

# ModelGauge

ModelGauge was originally planned as an evolution of crfm-helm, intended to meet its existing use cases as well as those of the MLCommons AI Safety project. However, that project, instead of using a large set of existing tests, developed a smaller set of custom ones. As a result, some of this code was moved into the related project MLCommons ModelBench, and this repository was archived.

## Summary

ModelGauge is a library that provides a set of interfaces for Tests and Systems Under Test (SUTs) such that:

  • Each Test can be applied to all SUTs with the required underlying capabilities (e.g. does it take text input?)
  • Adding new Tests or SUTs can be done without modifications to the core libraries or support from ModelGauge authors.

Currently, ModelGauge targets LLMs and single-turn prompt-response Tests, with Tests scored by automated Annotators (e.g. LlamaGuard). However, we expect to extend the library to cover more Test, SUT, and Annotation types as we move toward full release.

## Docs