# (PART) Part 2: The `tlverse` {-}

# What is the `tlverse`? {-}

The `tlverse` is a new framework for Targeted Learning in `R`, inspired by
the [`tidyverse` ecosystem](https://tidyverse.org) of `R` packages.
By analogy to the [`tidyverse`](https://tidyverse.org/):

> The `tidyverse` is an opinionated collection of `R` packages designed for data
> science. All packages share an underlying design philosophy, grammar, and data
> structures.

So, the [`tlverse`](https://tlverse.org) is

> An opinionated collection of `R` packages for Targeted Learning sharing an
> underlying design philosophy, grammar, and core set of data structures. The
> `tlverse` aims to provide tools both for building Targeted Learning-based
> data analyses and for implementing novel, state-of-the-art Targeted Learning
> methods.

## Anatomy of the `tlverse` {-}
All Targeted Learning methods are targeted maximum likelihood (or minimum
loss-based) estimators (i.e., TMLEs). The construction of any Targeted Learning
estimator proceeds through a two-stage process:

1. Flexibly learning particular components of the data-generating distribution,
   often through machine learning (e.g., Super Learning), resulting in _initial
   estimates_ of nuisance parameters.
2. Updating the initial estimates from the first stage via a carefully
   constructed parametric submodel, fit by maximum likelihood estimation (i.e.,
   MLE), to produce the final TML estimator.
The packages making up the core components of the `tlverse` software ecosystem
-- `sl3` and `tmle3` -- address these two stages, respectively. Together, the
general functionality they expose allows one to build TMLEs tailored exactly to
a particular statistical estimation problem.
The software packages that make up the **core** of the `tlverse` are

* [`sl3`](https://github.com/tlverse/sl3): Modern Super Machine Learning
  * _What?_ A modern object-oriented implementation of the Super Learner
    algorithm, employing recently developed paradigms in `R` programming.
  * _Why?_ A design that leverages modern ideas for faster computation, is
    easily extensible and forward-looking, and forms one of the cornerstones of
    the `tlverse`.
* [`tmle3`](https://github.com/tlverse/tmle3): An Engine for Targeted Learning
  * _What?_ A generalized framework that simplifies Targeted Learning by
    identifying and implementing a series of common statistical estimation
    procedures.
  * _Why?_ A common interface and engine that accommodates current algorithmic
    approaches to Targeted Learning and yet remains a flexible enough engine to
    power the implementation of emerging statistical techniques as they are
    developed.
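To make this division of labor concrete, here is a rough sketch of how the two
engines combine to estimate an average treatment effect (ATE) on simulated
data. This is an illustration rather than definitive documentation: the
simulated variables are invented for this example, and exact argument names may
differ across package versions.

```r
library(sl3)
library(tmle3)

# Simulate a simple data set: two baseline covariates, a binary treatment, and
# a binary outcome
set.seed(34729)
n <- 500
W1 <- rnorm(n)
W2 <- rbinom(n, 1, 0.5)
A <- rbinom(n, 1, plogis(0.4 * W1 - 0.3 * W2))
Y <- rbinom(n, 1, plogis(A + W1 - W2))
data <- data.frame(W1, W2, A, Y)

# Stage 1 machinery: a Super Learner ensemble built from sl3 learners
sl <- Lrnr_sl$new(learners = list(Lrnr_glm$new(), Lrnr_mean$new()))

# Declare the role each variable plays in the observed data structure
node_list <- list(W = c("W1", "W2"), A = "A", Y = "Y")

# Stage 2 machinery: a tmle3 "spec" encodes the target parameter (here, the
# average treatment effect of A = 1 versus A = 0)
ate_spec <- tmle_ATE(treatment_level = 1, control_level = 0)

# Each nuisance parameter (treatment mechanism A, outcome regression Y) gets
# its own Super Learner from stage 1
learner_list <- list(A = sl, Y = sl)

# tmle3() carries out both stages and returns the targeted estimate
tmle_fit <- tmle3(ate_spec, data, node_list, learner_list)
tmle_fit$summary
```

Note how the statistical estimation problem is decomposed: `sl3` objects handle
all of the flexible learning, while `tmle3` consumes them to perform the
targeted update and produce inference.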
Beyond these engines that provide the driving force behind the `tlverse`, there
are a few supporting packages that play important roles in the **background**:

* [`origami`](https://github.com/tlverse/origami): A Generalized Framework for
  Cross-Validation [@coyle2018origami]
  * _What?_ A generalized framework for flexible cross-validation.
  * _Why?_ Cross-validation is key to ensuring that error estimates are honest
    and to preventing overfitting. It is an essential part of both the Super
    Learner ensemble modeling algorithm and the construction of TML estimators.
* [`delayed`](https://github.com/tlverse/delayed): Parallelization Framework for
  Dependent Tasks
  * _What?_ A framework for delayed computations (i.e., futures) based on task
    dependencies.
  * _Why?_ Efficient allocation of compute resources is essential when deploying
    computationally intensive algorithms at large scale.
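As an illustration of the `origami` workflow, the sketch below cross-validates
a simple linear model on the built-in `mtcars` data. The fold function and
`cross_validate()` call follow the pattern described in the package
documentation, but treat the details as an assumption rather than a definitive
recipe.

```r
library(origami)

# Define what to do with a single fold: fit on the training split, then
# evaluate prediction error on the held-out validation split
cv_fun <- function(fold, data) {
  train <- training(data)
  valid <- validation(data)
  fit <- lm(mpg ~ hp + wt, data = train)
  preds <- predict(fit, newdata = valid)
  list(mse = mean((valid$mpg - preds)^2))
}

# make_folds() generates the cross-validation scheme (V-fold here; stratified,
# clustered, and time-series schemes are also supported)
folds <- make_folds(mtcars, V = 5)

# cross_validate() applies cv_fun to every fold and collects the results
results <- cross_validate(cv_fun, folds, mtcars)
mean(results$mse)  # cross-validated estimate of prediction error
```

The same fold abstraction underlies the cross-validation performed internally
by `sl3` and `tmle3`.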
A key principle of the `tlverse` is extensibility. That is, the software
ecosystem aims to support the development of novel Targeted Learning estimators
as they reach maturity. To achieve this degree of flexibility, we follow the
model of implementing new classes of estimators for distinct causal inference
problems in separate packages, all of which rely upon the core machinery
provided by `sl3` and `tmle3`. There are currently three examples:

* [`tmle3mopttx`](https://github.com/tlverse/tmle3mopttx): Optimal Treatments
  in the `tlverse`
  * _What?_ Learn an optimal rule and estimate the mean outcome under the rule.
  * _Why?_ Optimal treatments are a powerful tool in precision healthcare and
    other settings where a one-size-fits-all treatment approach is not
    appropriate.
* [`tmle3shift`](https://github.com/tlverse/tmle3shift): Stochastic Shift
  Interventions based on Modified Treatment Policies in the `tlverse`
  * _What?_ Stochastic shift interventions for evaluating changes in
    continuous-valued treatments.
  * _Why?_ Not all treatment variables are binary or categorical. Estimating the
    total effects of intervening on continuous-valued treatments provides a way
    to probe how an effect changes with shifts in the treatment variable.
* [`tmle3mediate`](https://github.com/tlverse/tmle3mediate): Causal Mediation
  Analysis in the `tlverse`
  * _What?_ Techniques for evaluating the direct and indirect effects of
    treatments through mediating variables.
  * _Why?_ Evaluating the total effect of a treatment does not provide
    information about the pathways through which it may operate. When mediating
    variables have been collected, one can instead evaluate direct and indirect
    effect parameters that speak to the _action mechanism_ of the treatment.
## Reproducibility with the `tlverse` {-}

The `tlverse` software ecosystem is a growing collection of packages, several of
which are quite early in the software lifecycle. The team does its best to
maintain backwards compatibility. Once this book reaches completion, the
specific versions of the `tlverse` packages used to produce it will be archived
and tagged.
This book was written using [bookdown](http://bookdown.org/), and the complete
source is available on [GitHub](https://github.com/tlverse/tlverse-handbook).
This version of the book was built with `r R.version.string`,
[pandoc](https://pandoc.org/) version `r rmarkdown::pandoc_version()`, and the
following packages:
```{r pkg-list, echo = FALSE, results="asis"}
# borrowed from https://github.com/tidymodels/TMwR/blob/master/index.Rmd
deps <- desc::desc_get_deps()
pkgs <- sort(deps$package[deps$type == "Imports"])
pkgs <- sessioninfo::package_info(pkgs, dependencies = FALSE)
df <- tibble::tibble(
  package = pkgs$package,
  version = pkgs$ondiskversion,
  source = gsub("@", "\\\\@", pkgs$source)
)
knitr::kable(df, format = "markdown")
```
# Primer on the `R6` Class System {-}
The `tlverse` is designed using basic object-oriented programming (OOP)
principles and the [`R6` OOP framework](https://adv-r.hadley.nz/r6.html). While
we've tried to make it easy to use the `tlverse` packages without worrying much
about OOP, it is helpful to have some intuition about how the `tlverse` is
structured. Here, we briefly outline some key concepts from OOP. Readers
familiar with OOP basics are invited to skip this section.
## Classes, Fields, and Methods {-}
The key concept of OOP is that of an object, a collection of data and functions
that corresponds to some conceptual unit. Objects have two main types of
elements:

1. _fields_, which can be thought of as nouns, are information about an object,
   and
2. _methods_, which can be thought of as verbs, are actions an object can
   perform.
Objects are members of classes, which define what those specific fields and
methods are. Classes can inherit elements from other classes (sometimes called
base classes) -- accordingly, classes that are similar, but not exactly the
same, can share some parts of their definitions.
Many different implementations of OOP exist, with variations in how these
concepts are implemented and used. `R` has several different implementations,
including `S3`, `S4`, reference classes, and `R6`. The `tlverse` uses the `R6`
implementation. In `R6`, methods and fields of a class object are accessed using
the `$` operator. For a more thorough introduction to `R`'s various OOP systems,
see http://adv-r.had.co.nz/OO-essentials.html, from Hadley Wickham's _Advanced
R_ [@wickham2014advanced].
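As a minimal illustration of these ideas in `R6`, the sketch below defines a
hypothetical `Learner` base class with one field and one method, plus a
`GlmLearner` subclass that inherits from it; the class names here are invented
for this example and are not part of any `tlverse` package.

```r
library(R6)

# A class bundles fields (data) and methods (actions)
Learner <- R6Class("Learner",
  public = list(
    name = NULL,                        # field: information about the object
    initialize = function(name) {       # constructor, run by $new()
      self$name <- name
    },
    describe = function() {             # method: an action the object performs
      paste("Learner:", self$name)
    }
  )
)

# Inheritance: a subclass reuses its base class and can override its methods
GlmLearner <- R6Class("GlmLearner",
  inherit = Learner,
  public = list(
    describe = function() {
      # super$ delegates to the base class implementation
      paste(super$describe(), "(a GLM)")
    }
  )
)

lrnr <- GlmLearner$new(name = "glm_fast")  # construct an object with $new()
lrnr$name                                  # access a field with $
lrnr$describe()                            # call a method with $
```

Note that `$` is used uniformly throughout: `$new()` constructs an object,
`$name` reads a field, and `$describe()` calls a method. This is exactly the
pattern used by `sl3` learners such as `Lrnr_glm$new()`.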
## Object Oriented Programming: `Python` and `R` {-}
OO concepts (classes with inheritance) were baked into Python from the first
published version (version 0.9 in 1991). In contrast, `R` gets its OO "approach"
from its predecessor, `S`, first released in 1976. For the first 15 years, `S`
had no support for classes, then, suddenly, `S` got two OO frameworks bolted on
in rapid succession: informal classes with `S3` in 1991, and formal classes with
`S4` in 1998. This process has continued, with new OO frameworks periodically
released in an attempt to improve `R`'s lackluster OO support: reference
classes (`R5`, 2010) and `R6` (2014). Of these, `R6` behaves most like `Python`
classes (and most like OOP-focused languages such as C++ and Java): method
definitions are part of class definitions, and objects can be modified by
reference.