---
title: "Decision Curve Analysis Tutorial"
output:
  html_document:
    toc: true
    toc_float: true
    includes:
      in_header: "highlight/header.html"
    css: "highlight/styles/agate.min.css"
params:
  language: r
editor_options:
  markdown:
    wrap: sentence
---
**This tutorial has moved. Visit [http://www.decisioncurveanalysis.org](http://www.decisioncurveanalysis.org) for the latest updates.**
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, warning = FALSE, fig.height = 3)
options(knitr.duplicate.label = "allow")
gtsummary::theme_gtsummary_compact()

library(dcurves)
library(tidyverse)
library(gtsummary)

add_tabset_codes <- function(chunk_label, eval = TRUE, echo = FALSE,
                             message = FALSE, include = TRUE,
                             language = params$language) {
  active_r <- ifelse(language %in% "r", "{.active}", "")
  active_stata <- ifelse(language %in% "stata", "{.active}", "")
  active_sas <- ifelse(language %in% "sas", "{.active}", "")
  active_python <- ifelse(language %in% "python", "{.active}", "")

  res <-
    knitr::knit_child(
      text = unlist(knitr::knit_expand('templates/tabset-template.Rmd')),
      quiet = TRUE,
      envir = rlang::caller_env()
    )
  cat(res, sep = '\n')
}

.convert_chunks_to_script <- function(language = params$language) {
  # convert language name to comment symbols
  comment_start <-
    switch(language,
           "sas" = "/*",
           "r" = "#",
           "stata" = "/*",
           "python" = "#")
  comment_end <-
    switch(language,
           "sas" = "*/",
           "r" = "",
           "stata" = "*/",
           "python" = "")
  extension <-
    switch(language,
           "sas" = ".sas",
           "r" = ".R",
           "stata" = ".do",
           "python" = ".py")

  readr::read_lines(here::here("rmd_chunks", paste0(language, "-chunks", extension))) %>%
    map_chr(
      ~ ifelse(
        # edit lines that are the chunk names
        startsWith(., "## ----") & endsWith(., "-----"),
        # replace the ## with the start comment symbols
        str_replace(., "^##", fixed(comment_start)) %>%
          # add end of comment symbol
          paste(comment_end) %>%
          # remove language name
          str_replace(fixed(paste0(language, "-")), ""),
        .
      )
    ) %>%
    readr::write_lines(
      here::here("rmd_chunks", "chunks_as_scripts", paste0("dca-script", extension))
    )
}

# save rmd chunks as a script
.convert_chunks_to_script()

# create link to download scripts
extension <- switch(params$language,
                    "sas" = ".sas", "r" = ".R",
                    "stata" = ".do", "python" = ".py")
script_url <-
  paste0(
    "https://github.com/ddsjoberg/dca-tutorial/raw/main/rmd_chunks/chunks_as_scripts/",
    "dca-script", extension
  )
```
```{r, read_chunk, include=FALSE}
knitr::read_chunk("rmd_chunks/r-chunks.R")
knitr::read_chunk("rmd_chunks/stata-chunks.do")
knitr::read_chunk("rmd_chunks/sas-chunks.sas")
knitr::read_chunk("rmd_chunks/python-chunks.py")
```
## Introduction
Diagnostic and prognostic models are typically evaluated with measures of accuracy, such as the area-under-the-curve (AUC) or the Brier score, that do not address clinical consequences.
Decision-analytic techniques allow assessment of clinical outcomes but generally require collection of extensive additional information and are cumbersome to apply to models that yield a risk estimate on a continuous scale from 0 to 1.
Decision curve analysis allows incorporation of clinical consequences, thus addressing the question of whether a model would do more good than harm, but is able to do so without excessive extraneous data.
This document will walk you through how to perform a decision curve analysis in a variety of different settings, and then how to interpret the resulting curves.
In decision curve analysis, models are compared to two default strategies: 1) assume that all patients are test positive and therefore treat everyone, or 2) assume that all patients are test negative and offer treatment to no one.
"Treatment" is considered in the widest possible sense, not only drugs, radiotherapy or surgery, but advice, further diagnostic procedures or more intensive monitoring.
For more on DCA, visit [www.decisioncurveanalysis.org](https://www.mskcc.org/departments/epidemiology-biostatistics/biostatistics/decision-curve-analysis): you'll find the original articles explaining the theory and mathematical derivation of net benefit along with papers justifying the advantages of decision curve analysis over other methods of model evaluation.
Below we will walk through how to perform decision curve analysis for binary and time-to-event outcomes using **R**, **Stata**, **SAS**, and **Python**.
Code is provided for all languages and can be downloaded or simply copy and pasted into your application to see how it runs.
For simplicity's sake, however, we only show output from the R functions; output is, naturally, very similar irrespective of programming language.
## Select a language
```{r, language-table, echo = FALSE}
tibble::tibble(
  r = list(gt::html('<a href="dca-tutorial-r.html"><img src="images/r-icon.png" height="100" /></a>')),
  stata = list(gt::html('<a href="dca-tutorial-stata.html"><img src="images/stata-icon.png" height="55" /></a>')),
  sas = list(gt::html('<a href="dca-tutorial-sas.html"><img src="images/sas-icon.png" height="55" /></a>')),
  python = list(gt::html('<a href="dca-tutorial-python.html"><img src="images/python-icon.png" height="55" /></a>'))
) |>
  gt::gt() |>
  gt::cols_label(
    r = gt::html('<b><a href="dca-tutorial-r.html"><font color="black">R</a></b>'),
    stata = gt::html('<b><a href="dca-tutorial-stata.html"><font color="black">Stata</a></b>'),
    sas = gt::html('<b><a href="dca-tutorial-sas.html"><font color="black">SAS</a></b>'),
    python = gt::html('<b><a href="dca-tutorial-python.html"><font color="black">Python</a></b>')
  ) |>
  gt::cols_width(everything() ~ gt::px(150)) |>
  gt::opt_table_lines("none") |>
  gt::tab_style(
    style = gt::cell_borders(
      sides = "top",
      color = "#000000",
      weight = gt::px(4)
    ),
    locations = gt::cells_column_labels(all_of(params$language))
  )
```
```{r, results='asis', echo = FALSE}
str_glue('<a href="{script_url}" download target="_blank"><b>Download DCA Code</b></a>')
```
## Install & Load
Use the scripts below to install the decision curve analysis functions and/or load them for use.
```{r, install, echo=FALSE, results='asis'}
add_tabset_codes("install", eval = FALSE)
```
## Binary Outcomes
### Motivating Example
We will be working with an example data set containing information about cancer diagnosis.
The data set includes information on `r nrow(df_binary)` patients who have recently discovered they have a gene mutation that puts them at a higher risk for harboring cancer.
Each patient has been biopsied and we know their cancer status.
It is known that older patients with a family history of cancer have a higher probability of harboring cancer.
A clinical chemist has recently discovered a marker that she believes can distinguish between patients with and without cancer.
We wish to assess whether or not the new marker does indeed distinguish between patients with and without cancer.
If the marker does indeed predict well, many patients will not need to undergo a painful biopsy.
### Data Set-up
We will walk through, step by step, how to import your data, build models based on multiple variables, and use those models to obtain predicted probabilities.
The first step is to import your data, label the variables, and produce a table of summary statistics.
The second step is to begin building your model.
As we have a binary outcome (i.e. the outcome of our model has two levels: cancer or no cancer), we will be using a logistic regression model.
```{r, import_cancer, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("import_cancer")
```
### Univariate Decision Curve Analysis
First, we want to confirm family history of cancer is indeed associated with the biopsy result.
```{r, model, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("model")
```
Via logistic regression with cancer as the outcome, we can see that family history is related to biopsy outcome, with an odds ratio of `r inline_text(mod1_summary, variable = "famhistory")`.
The decision curve analysis can help us address the clinical utility of using family history to predict biopsy outcome.
```{r, dca_famhistory, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_famhistory")
```
First, note that there are many threshold probabilities shown here that are not of interest.
For example, it is unlikely that a patient would demand that they had at least a 50% risk of cancer before they would accept a biopsy.
Let's do the decision curve analysis again, this time restricting the output to threshold probabilities within a more clinically reasonable range.
We think it would not make sense for a patient to opt for biopsy if their risk of cancer was less than 5%; similarly, it would be irrational not to get a biopsy if the risk was above 35%.
So we want to look at the range between 5% and 35%.
Because 5% is pretty close to 0%, we will show the range between 0% and 35%.
```{r, dca_famhistory2, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_famhistory2")
```
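As a rough plain-R illustration of what the tabbed code above does with the **dcurves** package (assuming the outcome column in `df_binary` is named `cancer`):

```{r, eval=FALSE}
# decision curve for family history alone,
# restricted to thresholds between 0% and 35%
dca(cancer ~ famhistory,
    data = df_binary,
    thresholds = seq(0, 0.35, by = 0.01))
```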
Now that the graph is showing a more reasonable range of threshold probabilities, let's assess the clinical utility of family history alone.
We can see here that although family history is significantly associated with biopsy outcome, it only adds value over a small range of threshold probabilities, roughly 13% to 20%.
If your personal threshold probability is 15% (i.e. you would only undergo a biopsy if your probability of cancer was 15% or more), then family history alone can be beneficial in making the decision to undergo biopsy.
However, if your threshold probability is less than 13% or higher than 20%, then using family history to decide on biopsy has less benefit than simply biopsying everyone or biopsying no one, respectively.
Hence, we would conclude that using family history to determine who to biopsy would help some patients but harm some others, and so should not be used in the clinic.
### Multivariable Decision Curve Analysis
#### Evaluation of New Models
We want to examine the value of a statistical model that incorporates family history, age, and the marker.
First, we will build the logistic regression model with all three variables, and then save the predicted probability of having cancer based on the model.
Note that in our example data set, this variable actually already exists so it wouldn't be necessary to create the predicted probabilities once again.
```{r, model_multi, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("model_multi")
```
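A minimal sketch of this step in R; the name of the column holding the predictions (`cancerpredmarker`) is illustrative here, since in the tutorial data it already exists:

```{r, eval=FALSE}
# fit the multivariable logistic model and save predicted probabilities
mod_multi <- glm(cancer ~ marker + age + famhistory,
                 data = df_binary, family = binomial)
df_binary$cancerpredmarker <- predict(mod_multi, type = "response")
```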
We now want to compare our different approaches to cancer detection: biopsying everyone, biopsying no-one, biopsying on the basis of family history, or biopsying on the basis of a multivariable statistical model including the marker, age and family history of cancer.
```{r, dca_multi, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_multi")
```
The key aspect of decision curve analysis is to look at which strategy leads to the largest net benefit (i.e. the "highest" line), which in this example would correspond to the model that includes age, family history of cancer, and the marker.
It is clear that the statistical model has the highest net benefit across the entire range of threshold probabilities.
Accordingly, we would conclude that using the model to decide whether to biopsy a patient would lead to better clinical outcomes.
The decision curve for the model should in theory be a smooth curve with net benefit always going down (or staying the same) as you move from left to right.
But the curve is a bit bumpy and sometimes even increases a bit, a result of statistical noise.
Some researchers prefer to show a smoothed curve and there are two options for doing so.
First, you can add a smoother.
Note that different programs use different smoothers as there is no one smoother that is best in every situation.
As such, results of a smoothed curve should always be compared with the unsmoothed curve to ensure that it gives a fair representation of the data.
```{r, dca_smooth, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_smooth")
```
An alternative is to calculate net benefit at wider intervals.
By default, the software calculates net benefit at every 1% of threshold probability, but this can cause artifacts unless the data set is very large.
You can specify instead to calculate net benefit every 5%.
```{r, dca_smooth2, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_smooth2")
```
You can also combine the two methods of smoothing together.
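With **dcurves**, for example, the two can be combined roughly as follows: wider spacing comes from the `thresholds` argument, and the plot method's `smooth` argument adds the smoother (a sketch, reusing the illustrative `cancerpredmarker` column from above):

```{r, eval=FALSE}
# net benefit every 5% of threshold probability, plotted with a smoother
dca(cancer ~ famhistory + cancerpredmarker,
    data = df_binary,
    thresholds = seq(0, 0.35, by = 0.05)) |>
  plot(smooth = TRUE)
```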
A few additional points are worth noting.
First, look at the green line, the net benefit for "treat all", that is, biopsy everyone.
This crosses the y axis at the prevalence.
Imagine that a man had a risk threshold of 14%, and asked his risk under the "biopsy all" strategy.
He would be told that his risk was the prevalence (14%) and so would be undecided about whether to biopsy or not.
When a patient's risk threshold is the same as his predicted risk, the net benefit of biopsying and not biopsying are the same.
Second, the decision curve for the binary variable (family history of cancer, the brown line) crosses the "biopsy all men" line at 1 minus the negative predictive value.
Again, this is easily explained: the negative predictive value is 87%, so a patient with no family history of cancer has a probability of disease of 13%; a patient with a threshold probability less than this -- for example, a patient who would opt for biopsy even if risk was 10% -- should therefore be biopsied even if they had no family history of cancer.
The net benefit for a binary variable crosses the "biopsy no one" line (a net benefit of zero) at the positive predictive value.
This is because for a binary variable, a patient with the characteristic is given a risk equal to the positive predictive value.
#### Evaluation of Published Models
Imagine that a model was published by Brown et al. with respect to our cancer biopsy data set.
The authors reported a statistical model with coefficients of 0.75 for a positive family history of cancer, 0.26 for each additional year of age, and an intercept of -17.5.
To test this formula on our data set:
```{r, pub_model, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("pub_model")
```
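Spelled out, the calculation behind the tabbed code is roughly the following (the new column name `phat_brown` is illustrative):

```{r, eval=FALSE}
# linear predictor from the published coefficients,
# converted to a probability with the inverse logit
logodds_brown <- -17.5 + 0.75 * df_binary$famhistory + 0.26 * df_binary$age
df_binary$phat_brown <- exp(logodds_brown) / (1 + exp(logodds_brown))

dca(cancer ~ phat_brown,
    data = df_binary,
    thresholds = seq(0, 0.35, by = 0.01))
```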
This decision curve suggests that although the model might be useful in the most risk averse patients, it is actually harmful in patients with more moderate threshold probabilities, that is, it has a net benefit lower than a reasonable clinical alternative.
As such, the Brown et al. model should not be used in clinical practice.
This effect, a model being harmful, occurs due to miscalibration, that is, when patients are given risks that are too high or too low.
Note that miscalibration only occurs rarely when models are created and tested on the same data set, such as in the example where we created a model with both family history and the marker.
#### Saving out Net Benefit Values
We can save out net benefit for a decision curve for further analysis or presentation in a table.
We simply need to specify the name of the file we'd like it to be saved as.
For a particular range of threshold probabilities, we need only specify the threshold at which to start, the threshold at which to stop, and the increment we'd like to use.
To assess or display the increase in net benefit we'd simply need to subtract the net benefit of the model based on treating all patients from the net benefit of our model.
Let us imagine that we want to view the net benefit of using only the marker to predict whether a patient has cancer, compared with the net benefits of biopsying all patients at thresholds of 5%, 10%, 15% ... 35%.
For the model itself, we would first need to specify that the marker variable -- unlike the model-based predictions above -- is not a probability.
Based on our thresholds, we'd want to begin at 0.05, and by increments of 0.05, stop at 0.35.
As we are not interested in the graph, we can also specify to suppress the graph from being displayed.
```{r, dca_table, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_table")
```
The saved table lists the range of threshold probabilities that we specified, followed by the net benefits of treating all, treating none, and our specified model, the interventions avoided (we discuss this below), and the newly created variable, which represents the increase in net benefit of our model using only the marker.
Net benefit has a ready clinical interpretation.
The value of 0.03 at a threshold probability of 20% can be interpreted as: "Compared with conducting no biopsies, biopsying on the basis of the marker is the equivalent of a strategy that found 3 cancers per hundred patients without conducting any unnecessary biopsies."
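With **dcurves** the same numbers can be pulled into a data frame for further analysis; as a sketch (the `as_probability` argument converts the raw marker values to predicted probabilities first):

```{r, eval=FALSE}
# net benefit at thresholds 5%, 10%, ..., 35%, returned as a tibble
dca(cancer ~ marker,
    data = df_binary,
    thresholds = seq(0.05, 0.35, by = 0.05),
    as_probability = "marker") |>
  as_tibble() |>
  dplyr::select(label, threshold, net_benefit)
```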
#### Interventions Avoided
In many cases, the default clinical strategy is to intervene on all patients, and thus the aim of a model or marker is to help reduce unnecessary interventions.
In contrast to the traditional method, where net benefit is calculated in terms of the net increase in true positives compared to no treatment, we can also calculate net benefit in terms of a net decrease in false positives compared to the "treat all" strategy.
In our cancer biopsy example, for instance, we might be interested in whether we could reduce the number of biopsies; the net interventions avoided are given in the table that was saved out.
To view decrease in interventions graphically, we would only need to specify it in our command.
```{r, dca_intervention, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_intervention")
```
At a probability threshold of 10%, the net reduction in interventions is about 0.15.
In other words, at this probability threshold, biopsying patients on the basis of the marker is the equivalent of a strategy that led to an absolute 15% reduction in the number of biopsies without missing any cancers.
You can also report the net interventions avoided per 100 patients (or any number of patients).
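In **dcurves** this is, as a sketch, a matter of piping the decision curve through `net_intervention_avoided()`, with `nper` setting the per-patient scaling:

```{r, eval=FALSE}
# decision curve expressed as net biopsies avoided per 100 patients
dca(cancer ~ marker,
    data = df_binary,
    thresholds = seq(0.05, 0.35, by = 0.01),
    as_probability = "marker") |>
  net_intervention_avoided(nper = 100) |>
  plot(smooth = TRUE)
```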
## Survival Outcomes
### Motivating Example
Patients had a marker measured and were followed to determine whether they were eventually diagnosed with cancer, as well as the time to that diagnosis or censoring.
We want to build a model of our own based on age, family history, and the marker, and assess how well the model predicts cancer diagnosis within 1.5 years.
### Basic Data Set-up
We first need to import our data.
```{r, import_ttcancer, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("import_ttcancer")
```
The survival probability to any time point can be derived from any type of survival model; here we use a Cox model, as this is the most common in statistical practice.
You don’t necessarily need to know this to run a decision curve, but the formula for a survival probability from a Cox model is given by:
$$
S(t|X) = S_0(t)^{e^{X\beta}}
$$
where $X$ is the matrix of covariates in the Cox model, $\beta$ is the vector of parameter estimates from the Cox model, and $S_0(t)$ is the baseline survival probability to time $t$.
To run a decision curve analysis, we will create a Cox model with age, family history, and the marker as predictors, save out the baseline survival function in a new variable, and obtain the linear prediction from the model for each subject.
We then obtain the baseline survival probability to our time point of interest.
If no patient was observed at the exact time of interest, we can use the baseline survival probability to the observed time closest to, but not after, the time point.
We can then calculate the probability of failure at the specified time point.
For our example, we will use a time point of 1.5, which corresponds to the 18 months that we are interested in.
```{r, coxph, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("coxph")
```
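A sketch of these steps with the **survival** package; the data frame (`df_surv`) and column names here are illustrative, and the tabbed code above is the definitive version:

```{r, eval=FALSE}
library(survival)

# Cox model with age, family history, and the marker as predictors
cox_mod <- coxph(Surv(ttcancer, cancer) ~ age + famhistory + marker,
                 data = df_surv)

# probability of failure (cancer diagnosis) by 1.5 years: 1 - S(t | X)
df_surv$pr_failure18 <-
  1 - summary(survfit(cox_mod, newdata = df_surv), times = 1.5)$surv[1, ]
```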
The code for running the decision curve analysis is straightforward after the probability of failure is calculated.
All we have to do is specify the time point we are interested in.
For our example, let us not only set the threshold from 0% to 50%, but also add smoothing.
```{r, stdca_coxph, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("stdca_coxph")
```
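The corresponding **dcurves** call is, as a sketch (continuing the illustrative names above; the `time` argument fixes the prediction horizon):

```{r, eval=FALSE}
# decision curve for 1.5-year cancer risk, thresholds 0-50%, smoothed plot
dca(Surv(ttcancer, cancer) ~ pr_failure18,
    data = df_surv,
    time = 1.5,
    thresholds = seq(0, 0.50, by = 0.01)) |>
  plot(smooth = TRUE)
```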
This shows that using the model to inform clinical decisions will lead to superior outcomes for any decision associated with a threshold probability of above 2% or so.
### Competing Risks
At times, data sets are subject to competing risks.
For example in our cancer data set, patients may have died prior to cancer diagnosis.
To run a competing risk analysis, we first create a failure variable that indicates which patients died before a cancer diagnosis and then run a survival time decision curve analysis.
```{r, stdca_cmprsk, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("stdca_cmprsk")
```
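As a sketch of the **dcurves** convention for competing risks, the status variable is coded as a factor whose first level is censoring, with the event of interest second (variable names illustrative):

```{r, eval=FALSE}
# status factor: censored first, cancer diagnosis second, competing death third
df_surv$cancer_cr <-
  factor(df_surv$status_label,
         levels = c("censor", "diagnosed with cancer", "dead other causes"))

dca(Surv(ttcancer, cancer_cr) ~ pr_failure18,
    data = df_surv,
    time = 1.5,
    thresholds = seq(0, 0.50, by = 0.01))
```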
## Case-Control Designs
An issue with applying decision curve analysis to case-control data is that net benefit depends on prevalence, and prevalence in case-control studies is fixed by design.
When working with case-control data, you can pass the population prevalence to accurately calculate the net benefit.
```{r, import_case_control, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("import_case_control")
```
In the example below, the population prevalence is 20%.
```{r, dca_case_control, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_case_control")
```
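In **dcurves** the population prevalence is supplied directly; a sketch, with illustrative data frame and column names:

```{r, eval=FALSE}
# net benefit computed as if the disease prevalence were 20%
dca(casecontrol ~ cancerpredmarker,
    data = df_case_control,
    prevalence = 0.20,
    thresholds = seq(0, 0.50, by = 0.01))
```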
The interpretation of the decision curve analysis figure is the same as for a cohort study.
In this example, the model performs worse than the biopsy-all strategy from a threshold of 0% to 17%, while it has benefit from 18% to 40%.
As the model does not have the highest net benefit across the full range of reasonable threshold probabilities, it is not justified to use it in practice.
## Incorporating Harms
To incorporate the harm associated with obtaining the marker, we ask a clinician, who tells us that, even if the marker were perfectly accurate, few clinicians would conduct more than 30 tests to predict one cancer diagnosis.
This might be because the test is expensive, or requires some invasive procedure to obtain.
The "harm" of measuring the marker is the reciprocal of 30, or 0.0333.
```{r, dca_harm_simple, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_harm_simple")
```
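With **dcurves**, harm is attached to a strategy via a named list; as a sketch (again using the illustrative `cancerpredmarker` column):

```{r, eval=FALSE}
# subtract the harm of measuring the marker (1/30) from the model's net benefit
dca(cancer ~ cancerpredmarker,
    data = df_binary,
    harm = list(cancerpredmarker = 1 / 30),
    thresholds = seq(0, 0.35, by = 0.01))
```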
This shows the net benefit of the model is no longer superior across the full range of reasonable threshold probabilities.
Hence the benefit of using the model does not outweigh the harm of having to collect the marker needed by the model.
Many decisions in medicine are based on conditional test results, however.
A classic example is where patients are categorized on the basis of a test as being at high, low or intermediate risk.
Patients at high risk are referred immediately for treatment (in our example biopsied); patients at low risk are reassured and advised that no further action is necessary; patients at intermediate risk are sent for an additional test, with subsequent treatment decisions made accordingly.
We have to calculate the harm specially for the conditional test, because only patients at intermediate risk have the marker measured, and then incorporate it into our decision curve.
The harm of the conditional test is calculated by multiplying the proportion of patients tested by the harm of the test.
```{r, dca_harm, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("dca_harm", eval = FALSE)
```
## Correction for Overfit
As is well known, evaluating a model on the same data set that was used to generate the model will give overoptimistic estimates of model performance.
One way to correct for this sort of overfit is by using 10-fold cross validation.
The key steps are as follows:
1. Randomly divide the data set into 10 groups of equal size, with equal numbers of events in each group.
2. Fit the model using all but the first group.
3. Apply the model created from all observations other than those in the first group (in step 2), to the first group to obtain the predicted probability of the event.
4. Repeat steps (2) and (3) leaving out and then applying the fitted model for each of the groups. Every subject now has a predicted probability of the event derived from a model that was generated from a data set that did not include that subject.
5. Using the predicted probabilities, compute the net benefit at various threshold probabilities.
6. One approach is to repeat this process many times and take an average (repeated 10-fold cross validation). Although this is not always done, we will include code for it here. Repeat steps (1) to (5) 200 times. The corrected net benefit for each threshold probability is the mean across the 200 replications.
```{r, cross_validation, echo=FALSE, message = FALSE, results='asis'}
add_tabset_codes("cross_validation")
```
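For orientation, one round of the loop looks roughly like this in R (the tabbed chunk above is the full version; note this sketch uses simple random folds rather than folds with equal numbers of events):

```{r, eval=FALSE}
set.seed(112358)
# assign each subject to one of 10 folds
folds <- sample(rep(1:10, length.out = nrow(df_binary)))
df_binary$xval_pred <- NA_real_
for (k in 1:10) {
  # fit on all folds except fold k
  fit <- glm(cancer ~ marker + age + famhistory,
             data = df_binary[folds != k, ], family = binomial)
  # predict for the held-out fold
  df_binary$xval_pred[folds == k] <-
    predict(fit, newdata = df_binary[folds == k, ], type = "response")
}

# decision curve using the cross-validated probabilities
dca(cancer ~ xval_pred,
    data = df_binary,
    thresholds = seq(0, 0.35, by = 0.01))
```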
Note that the predicted probabilities for each subject can be used to calculate other performance metrics, such as discrimination.