# Specifying Outcome Constraints

## Introduction

Outcome constraints can be a crucial component of optimization in Ax. They allow you to specify constraints on the outcomes of your experiment, ensuring that the optimized parameters do not degrade certain metrics.

## Prerequisites

Before we begin, you must instantiate the `Client` and configure it with your experiment and metrics.

We will also assume you are already familiar with [using Ax for ask-tell optimization](#).

```python
client = Client()

client.configure_experiment(...)
client.configure_metrics(...)
```

## Steps

1. Configure an optimization with outcome constraints
2. Continue with iterating over trials and evaluating them

### 1. Configure an optimization with outcome constraints

We can leverage the Client's `configure_optimization` method to configure an optimization with outcome constraints. This method takes an objective string and a list of outcome constraint strings.

Outcome constraints allow us to express a desire to have a metric clear a threshold but not be further optimized. These constraints are expressed as inequalities.

```python
client.configure_optimization(
    objective="testObjective",
    outcome_constraints=["qps >= 100"],
)
```

To indicate a relative constraint, multiply your bound by "baseline":

```python
client.configure_optimization(
    objective="testObjective",
    outcome_constraints=["qps >= 0.95 * baseline"],
)
```

This example will constrain the outcomes such that the QPS is at least 95% of the baseline arm's QPS.

Note that scalarized outcome constraints cannot be relative.
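As a hedged illustration, a scalarized outcome constraint combines several metrics into a single weighted inequality, so its bound must be absolute rather than relative to `baseline`. The metric names `metric_a` and `metric_b` below are hypothetical placeholders, not metrics from this experiment:

```python
# Sketch of a scalarized outcome constraint with an absolute bound.
# `metric_a` and `metric_b` are hypothetical metric names; the weighted
# sum of their readings must clear the threshold of 10.
client.configure_optimization(
    objective="testObjective",
    outcome_constraints=["0.5 * metric_a + 0.5 * metric_b >= 10"],
)
```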

### 2. Continue with iterating over trials and evaluating them

Now that your experiment has been configured with outcome constraints, you can simply continue iterating over trials and evaluating them as you typically would.

```python
trial_idx, parameters = client.get_next_trials().popitem()
client.complete_trial(...)
```
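Putting the pieces together, a minimal ask-tell loop might look like the following sketch. The `evaluate` function and the trial budget of 20 are assumptions for illustration; `evaluate` stands in for however you measure your metrics:

```python
# Hedged sketch of an ask-tell loop with outcome constraints configured.
# `evaluate` is a hypothetical function that runs one parameterization and
# returns raw readings for every configured metric.
for _ in range(20):  # trial budget chosen arbitrarily for this sketch
    trial_idx, parameters = client.get_next_trials(max_trials=1).popitem()
    results = evaluate(parameters)  # e.g. {"testObjective": 1.2, "qps": 105.0}
    client.complete_trial(trial_index=trial_idx, raw_data=results)
```

Arms whose observed `qps` falls below the constraint threshold are treated as infeasible, so the optimization steers candidate generation toward parameterizations that satisfy the constraint while improving the objective.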

## Learn more

Take a look at these other resources to continue your learning:

- [Multi-objective Optimizations in Ax](#)
- [Scalarized Objective Optimizations with Ax](#)