Structure-Agnostic Causal Estimation
We have another new technical blog post, courtesy of Jikai Jin and Vasilis Syrgkanis, about the optimality of double machine learning for causal inference.
An introduction to causal inference
Causal inference deals with the fundamental question of “what if”, trying to estimate/predict the counterfactual outcome that one does not directly observe. For instance, one may want to understand the effect of a new medicine on a population of patients. For each patient, we never simultaneously observe the outcome under the new medicine (treatment) and the outcome under the baseline treatment (control). This makes causal inference a challenging task, and the ground-truth causal parameter of interest is identifiable only under additional assumptions on the data generating process.
The most central quantity of interest in the causal inference literature is the Average Treatment Effect (ATE). To mathematically define the ATE, we will use the language of potential outcomes. We posit that nature generates two potential outcomes $Y_i(0), Y_i(1)$, where $Y_i(d)$ can be thought of as the outcome we would have observed from unit $i$, had we treated them with treatment $d \in \{0, 1\}$. Then the ATE is defined as the average difference of these two potential outcomes in the population:
$$\theta_0 := \mathbb{E}\big[Y(1) - Y(0)\big].$$
Unless otherwise specified, we will always use a subscript $0$ to denote a ground-truth quantity. The main problem is that for each unit we do not observe both potential outcomes. Rather, we observe only the potential outcome for the assigned treatment $D \in \{0, 1\}$, i.e. $Y = Y(D)$.
The first key question in causal inference is the identification question: can we write the ATE, which depends on the distribution of unobserved quantities, as a function of the distribution of observed random variables? Many techniques have been developed in causal inference that solve the identification question under various assumptions on the data generating process and the kinds of variables that are observed. For the interested reader, one can search for terms such as identification by conditioning, instrumental variables, proximal causal inference, difference-in-differences, regression discontinuity and synthetic controls, and refer to related textbooks [AP09,CHK+24].
For the purpose of this blog we will focus on identification by conditioning, which has been well-studied in the literature and is very frequently used in the practice of causal inference. This identification approach makes the assumption that, once we condition on a large enough set of observed characteristics $X$ (typically referred to as “control variables” or “confounders”), the treatment is assigned as if it were in a randomized trial; a condition typically referred to as the conditional ignorability assumption
$$\big(Y(1), Y(0)\big) \perp\!\!\!\perp D \mid X.$$
Under this assumption, the ATE is identifiable via the well-known g-formula:
$$\theta_0 = \mathbb{E}\big[g_0(1, X) - g_0(0, X)\big],$$
where the function $g_0(d, x) := \mathbb{E}[Y \mid D = d, X = x]$ is a regression function and is thus uniquely determined by the distribution of the observed data. (The identification step uses conditional ignorability: $\mathbb{E}[Y(d) \mid X] = \mathbb{E}[Y(d) \mid X, D = d] = \mathbb{E}[Y \mid X, D = d] = g_0(d, X)$, and averaging over $X$ yields the formula above.) Intuitively, this formula says: train a predictive model that predicts the outcome from the treatment and the control variables, and then take the average difference of the predictions of this model as you flip the treatment variable on or off. This quantity is also strongly related to the partial dependence plot, used frequently in interpretable machine learning: it corresponds to the difference between the value of the partial dependence plot of the outcome with respect to the treatment when the treatment equals one and its value when the treatment equals zero.
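To make the g-formula concrete, here is a minimal Python sketch of the plug-in (regression adjustment) estimator it suggests; the choice of learner, the data layout, and the function name `plugin_ate` are illustrative assumptions, not part of the original post.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def plugin_ate(X, D, Y):
    """Plug-in g-formula: fit g(d, x) ~ E[Y | D=d, X=x], then average the
    difference of predictions with the treatment flipped to 1 and to 0."""
    # Regress Y on (D, X) with a flexible ML model (any regressor works here).
    g_hat = GradientBoostingRegressor().fit(np.column_stack([D, X]), Y)
    # Predict each unit's outcome under treatment and under control.
    y1 = g_hat.predict(np.column_stack([np.ones_like(D), X]))
    y0 = g_hat.predict(np.column_stack([np.zeros_like(D), X]))
    return np.mean(y1 - y0)
```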
The causal machine learning paradigm
The second key question in causal inference is the estimation question: given samples of the observed variables, how should we estimate the ATE? In other words, we need to translate the identification strategy into an estimation strategy. For instance, in the context of identification by conditioning, note that even though our goal is to estimate the scalar $\theta_0$, to achieve that we also need to estimate the complicated non-parametric regression function $g_0$. Such auxiliary functions, whose estimation is required in order to estimate the target parameter of interest, are referred to as nuisance functions. The requirement to estimate complicated nuisance functions in a flexible manner arises in most identification strategies in causal inference, and this is exactly where machine learning techniques can be of great help, giving rise to the Causal Machine Learning paradigm.
At a high level, causal machine learning is an emerging research area that incorporates machine learning (ML) techniques into statistical problems that emerge in causal inference. In the past decade, ML has gained tremendous success on numerous tasks, such as image classification, language processing, and video games. These problems more or less possess certain intrinsic structures that one can exploit. In image classification problems, for example, semantically meaningful objects can typically be found locally as a combination of pixels, and this suggests that using convolutional neural networks, rather than standard feed-forward neural networks, might lead to better results. The idea of causal machine learning is to leverage the ability of ML techniques to adapt to intrinsic notions of dimension, when learning the complex nuisance quantities that arise in causal identification strategies.
Double/debiased machine learning: an overview
What makes causal ML different from ML? To answer this question, it is instructive to revisit an extremely popular algorithm in causal ML: double/debiased machine learning (DML) [CCD+17] (variants of the ideas we will present below have also appeared in the targeted learning literature [LR11], but for simplicity of exposition we adopt the DML paradigm in this blog post).
Suppose that we are given $n$ i.i.d. data points $Z_i = (X_i, D_i, Y_i)$, where $X_i \in \mathbb{R}^p$ is a high-dimensional covariate vector, $D_i \in \{0, 1\}$ is a binary treatment variable and $Y_i \in \mathbb{R}$ is an outcome of interest. Without loss of generality, we can describe the data generating process of these variables via the following nonparametric regression equations:
$$Y = g_0(D, X) + U, \qquad \mathbb{E}[U \mid D, X] = 0,$$
$$D = m_0(X) + V, \qquad \mathbb{E}[V \mid X] = 0,$$
where $g_0(d, x) = \mathbb{E}[Y \mid D = d, X = x]$ is known as the outcome regression and $m_0(x) = \mathbb{P}(D = 1 \mid X = x)$ is known as the propensity score. Let $\mathcal{D}_X$ be the marginal distribution of $X$. Then the ATE problem asks us to estimate the quantity $\theta_0 = \mathbb{E}_{X \sim \mathcal{D}_X}\big[g_0(1, X) - g_0(0, X)\big]$.
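For concreteness, the following Python snippet simulates one such data generating process; the particular functional forms of $g_0$ and $m_0$ are made up for illustration and are not from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=5000, p=10):
    """Draw (X, D, Y) from a DGP of the form above, with illustrative
    choices of the outcome regression g0 and the propensity score m0."""
    X = rng.uniform(-1.0, 1.0, size=(n, p))
    m0 = 1.0 / (1.0 + np.exp(-X[:, 0] - 0.5 * X[:, 1]))  # m0(X) = P(D = 1 | X)
    D = rng.binomial(1, m0)                               # binary treatment
    g0 = D * (1.0 + X[:, 0]) + np.sin(X[:, 1])            # g0(D, X) = E[Y | D, X]
    Y = g0 + rng.normal(scale=0.5, size=n)
    return X, D, Y

# For this DGP, g0(1, x) - g0(0, x) = 1 + x_1, so the true ATE equals 1.
```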
The ATE is just one example of a broad class of causal parameter estimation problems, for which the ground-truth parameter $\theta_0$ satisfies some moment equation
$$\mathbb{E}\big[\psi(Z; \theta_0, \eta_0(W))\big] = 0,$$
where $\psi$ is some moment function, $Z = (X, D, Y)$ is the observed data, $W$ is a subvector of $Z$ and $\eta_0$ is the ground-truth nuisance function. In the case of the ATE, we can for example choose $\psi(Z; \theta, \eta(W)) = \eta(1, X) - \eta(0, X) - \theta$, with $W = (D, X)$ and $\eta_0 = g_0$. Given this expression, a naive approach for estimating $\theta_0$ can be derived by first using ML to fit an estimate $\hat{\eta}$ of the ground-truth nuisance function $\eta_0$, and then solving the empirical moment equation
$$\frac{1}{n} \sum_{i=1}^{n} \psi\big(Z_i; \hat{\theta}, \hat{\eta}(W_i)\big) = 0.$$
However, the resulting estimate would be biased if the nuisance estimate is biased. The latter happens quite often in practice, since ML typically requires using regularization to prevent the model from overfitting. As a result, it would be desirable if the quality of our estimate were more robust to nuisance estimation errors.
The key observation is that this would be the case if a Neyman orthogonality condition holds, namely
$$D_{\eta}\, \mathbb{E}\big[\psi(Z; \theta_0, \eta_0)\big]\big[\eta - \eta_0\big] = 0 \quad \text{for all candidate nuisances } \eta,$$
where $D_{\eta}$ denotes the functional (directional) derivative of $\eta \mapsto \mathbb{E}\big[\psi(Z; \theta_0, \eta)\big]$, evaluated at $\eta_0$ in the direction $\eta - \eta_0$.
Intuitively, this condition implies that the induced error is insensitive, to first order, to misspecification of the nuisance function. A simple Taylor expansion then implies that the estimation error of the $\hat{\theta}$ solved from the empirical moment equation has only second-order dependence on the nuisance errors.
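To spell out this step (a schematic expansion, suppressing regularity conditions): plugging a nuisance estimate $\hat{\eta}$ into the population moment and expanding around $\eta_0$ gives
$$\mathbb{E}\big[\psi(Z; \theta_0, \hat{\eta})\big] = \underbrace{\mathbb{E}\big[\psi(Z; \theta_0, \eta_0)\big]}_{=\,0} + \underbrace{D_{\eta}\,\mathbb{E}\big[\psi(Z; \theta_0, \eta_0)\big]\big[\hat{\eta} - \eta_0\big]}_{=\,0 \text{ by Neyman orthogonality}} + O\big(\|\hat{\eta} - \eta_0\|^2\big),$$
so the bias introduced by the nuisance estimate is quadratic, rather than linear, in the nuisance error.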
In the case of the ATE, the moment function
$$\psi(Z; \theta, g, m) = g(1, X) - g(0, X) + \frac{D\,\big(Y - g(1, X)\big)}{m(X)} - \frac{(1 - D)\,\big(Y - g(0, X)\big)}{1 - m(X)} - \theta$$
satisfies such requirements, where the nuisance is now the pair $\eta = (g, m)$ and the ground truth is $\eta_0 = (g_0, m_0)$. Given our dataset of $n$ samples, we can split it into two datasets $S_1$ and $S_2$, each with $n/2$ samples. Then DML consists of the following two stages:
- Use our favorite ML method (e.g. Lasso, random forest, neural network, etc.) on the samples in $S_1$ to obtain nuisance estimates $\hat{g}$ and $\hat{m}$.
- Solve for $\hat{\theta}$ from the empirical moment equation over $S_2$,
$$\frac{1}{|S_2|} \sum_{Z_i \in S_2} \psi\big(Z_i; \hat{\theta}, \hat{g}, \hat{m}\big) = 0,$$
where $(\hat{g}, \hat{m})$ is our first-stage nuisance estimate.
Note that the main reason why DML improves over the naive approach is that the moment function $\psi$ is chosen to satisfy the Neyman orthogonality property. By contrast, one can easily verify that the naive moment function $\eta(1, X) - \eta(0, X) - \theta$ is not Neyman orthogonal.
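To make the two stages concrete, here is a minimal Python sketch of this estimator with a single sample split; the choice of learners, the clipping of the estimated propensities, and the function name `dml_ate` are illustrative assumptions, not prescriptions from the original post.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def dml_ate(X, D, Y):
    """DML estimate of the ATE: stage 1 fits (g, m) on the first half of the
    data; stage 2 solves the orthogonal (doubly robust) moment on the second half."""
    n = len(Y) // 2
    idx1, idx2 = np.arange(n), np.arange(n, 2 * n)

    # Stage 1: nuisance estimation on S1.
    g_hat = GradientBoostingRegressor().fit(np.column_stack([D[idx1], X[idx1]]), Y[idx1])
    m_hat = GradientBoostingClassifier().fit(X[idx1], D[idx1])

    # Stage 2: plug the nuisances into the Neyman-orthogonal moment on S2.
    X2, D2, Y2 = X[idx2], D[idx2], Y[idx2]
    g1 = g_hat.predict(np.column_stack([np.ones(n), X2]))
    g0 = g_hat.predict(np.column_stack([np.zeros(n), X2]))
    m = np.clip(m_hat.predict_proba(X2)[:, 1], 0.01, 0.99)  # guard against division by ~0
    psi = g1 - g0 + D2 * (Y2 - g1) / m - (1 - D2) * (Y2 - g0) / (1 - m)
    # The empirical moment equation (1/|S2|) * sum(psi - theta) = 0 gives theta = mean(psi).
    return np.mean(psi)
```

On data drawn from the simulated DGP sketched earlier, `dml_ate(X, D, Y)` should return a value close to the true ATE of 1, up to sampling noise.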
It is well known that for the case of the ATE, the DML approach also possesses the double robustness property, which dates back to the seminal work of [RRZ94]. In fact, the resulting estimator is the well-known doubly robust estimator [RRZ94] with the extra element of sample splitting when estimating the nuisance functions. Specifically, suppose that our first-stage nuisance estimates $\hat{g}$ and $\hat{m}$ have mean-squared errors $\epsilon_g^2$ and $\epsilon_m^2$ respectively; then, under mild regularity assumptions, the DML estimate $\hat{\theta}$ satisfies
$$\big|\hat{\theta} - \theta_0\big| \lesssim \epsilon_g\,\epsilon_m + n^{-1/2}$$
with high probability. Intuitively, because the estimation error of $\hat{\theta}$ stems from the misspecification of the nuisance functions in the moment equation, by Taylor's formula it contains a term of order $\epsilon_g^{a}\,\epsilon_m^{b}$ if and only if the corresponding mixed functional derivative of the moment is non-zero. By calculating the functional derivatives, it is then easy to check that $\epsilon_g\,\epsilon_m$ is the dominating term. In particular, Neyman orthogonality implies that all first-order error terms vanish.
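For a quick sense of what this buys us: if both nuisances are estimated at the slow non-parametric rate $\epsilon_g = \epsilon_m = n^{-1/4}$, then
$$\epsilon_g\,\epsilon_m + n^{-1/2} = n^{-1/4} \cdot n^{-1/4} + n^{-1/2} \asymp n^{-1/2},$$
so the ATE is still estimated at the parametric $n^{-1/2}$ rate, even though neither nuisance estimate converges that fast.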
Importantly, this guarantee is structure-agnostic: this rate does not rely on any structural assumptions on the nuisance functions. What we need to assume is merely access to black-box ML estimates with some mean-squared error bounds. This is the reason why DML is widely adopted in practice: while there exist alternative estimators that can achieve improved error rates under structural assumptions on the non-parametric components, these assumptions can easily be violated, making these estimators cumbersome to deploy.
The problems that causal ML studies are not new. In the non-parametric estimation literature, there have been extensive results that focus on non-parametric efficiency and optimal rates for estimating causal quantities, under structural assumptions on the model such as smoothness of the non-parametric parts of the data generating process [RLM17,KBRW22]. However, the causal ML approach takes a more structure-agnostic view on the estimation of these nuisance quantities, and essentially solely assumes access to a good black-box oracle that provides us with relatively accurate estimates. This naturally gives rise to the structure-agnostic minimax optimality framework.
The structure-agnostic framework
We have seen that the key characteristic that differentiates the causal ML approach to estimation (e.g. the DML approach) from the traditional approaches is its structure-agnostic nature. In this section, we discuss the structure-agnostic framework that allows us to compare the performance of structure-agnostic estimators. This framework was originally proposed by [BKW23].
To keep things simple, we restrict ourselves to the same setting as the previous section. Now suppose we have nuisance estimates $\hat{g}$ and $\hat{m}$ with mean-squared errors $\epsilon_g^2$ and $\epsilon_m^2$. The structure-agnostic minimax optimality framework asks the following question: if we make no further restriction on the data generating process other than the fact that we have access to estimates of the nuisance functions whose mean-squared errors are upper bounded by $\epsilon_g^2$ and $\epsilon_m^2$, what is the best estimation rate achievable by any estimation method?
To formalize this, we define the uncertainty set as the set containing all distributions that are consistent with the given estimates:
$$\mathcal{F}_{\epsilon_g, \epsilon_m} := \Big\{ (g, m, \mathcal{D}_X) \;:\; \mathbb{E}_{X \sim \mathcal{D}_X}\big[\big(g(d, X) - \hat{g}(d, X)\big)^2\big] \le \epsilon_g^2 \text{ for } d \in \{0, 1\}, \ \ \mathbb{E}_{X \sim \mathcal{D}_X}\big[\big(m(X) - \hat{m}(X)\big)^2\big] \le \epsilon_m^2 \Big\},$$
where $\mathcal{D}_X$ is the marginal distribution of $X$. Here we restrict ourselves to the case where $Y$ and $D$ are binary; this additional constraint only strengthens the minimax lower bounds presented in this blog. In this case, each tuple $(g, m, \mathcal{D}_X)$ uniquely determines a distribution over the observed data $(X, D, Y)$. For any set $\mathcal{F}$ of such tuples, we define the minimax quantile risk for estimating the ATE by
$$\mathfrak{R}_{n, \delta}(\mathcal{F}) := \inf_{\hat{\theta}} \sup_{f \in \mathcal{F}} Q_{\mathcal{D}_f, 1 - \delta}\big(\big|\hat{\theta} - \theta_f\big|\big),$$
where the infimum is over all estimators $\hat{\theta}$ based on $n$ i.i.d. samples (and the given nuisance estimates $\hat{g}, \hat{m}$), $\mathcal{D}_f$ and $\theta_f$ are the data distribution and the ATE induced by $f = (g, m, \mathcal{D}_X)$ respectively, and $Q_{\mathcal{D}_f, 1 - \delta}$ is the $(1 - \delta)$-quantile under the data distribution $\mathcal{D}_f$. Clearly, our previous discussion of DML implies that the worst-case risk is at most of order $\epsilon_g\,\epsilon_m + n^{-1/2}$. This framework precisely captures the main idea behind the causal ML estimators that we described in the previous section.
Main results
In this section, we present our main results on structure-agnostic lower bounds [JS24]. Prior to our work, the only known structure-agnostic lower bounds were established in [BKW23]. In their paper, it is shown that DML is optimal for estimating a set of functionals of interest, which relate to the ATE but do not include the ATE functional itself.
Our first result establishes the optimality of DML for estimating the ATE, i.e., that the doubly robust estimator with sample splitting achieves the statistically optimal rate. As discussed in the previous section, the DML estimator for the ATE is given by
$$\hat{\theta}_{\mathrm{DML}} = \frac{1}{|S_2|} \sum_{Z_i \in S_2} \left[ \hat{g}(1, X_i) - \hat{g}(0, X_i) + \frac{D_i \big(Y_i - \hat{g}(1, X_i)\big)}{\hat{m}(X_i)} - \frac{(1 - D_i)\big(Y_i - \hat{g}(0, X_i)\big)}{1 - \hat{m}(X_i)} \right]$$
and has the structure-agnostic rate $\epsilon_g\,\epsilon_m + n^{-1/2}$. We now establish a matching lower bound.
Theorem 1. Let $\mathcal{X} = [0, 1]^d$ and let $\mathcal{F}^{\mathrm{unif}}$ contain all distributions in $\mathcal{F}_{\epsilon_g, \epsilon_m}$ whose marginal distribution of $X$ is uniform on $\mathcal{X}$. For any constant $\delta \in (0, 1/2)$, if our nuisance estimates $\hat{g}, \hat{m}$ take values in $[c, 1 - c]$, where $c \in (0, 1/2)$ is a constant, then
$$\mathfrak{R}_{n, \delta}\big(\mathcal{F}^{\mathrm{unif}}\big) \gtrsim \epsilon_g\,\epsilon_m + n^{-1/2}.$$
Interestingly, knowing the marginal distribution of $X$ does not change the statistical limit.
We also consider another important causal parameter, the average treatment effect on the treated (ATT), defined by $\theta_0^{\mathrm{ATT}} := \mathbb{E}\big[Y(1) - Y(0) \mid D = 1\big]$. Under conditional ignorability, it can be written as
$$\theta_0^{\mathrm{ATT}} = \mathbb{E}\big[g_0(1, X) - g_0(0, X) \mid D = 1\big].$$
The DML estimate of $\theta_0^{\mathrm{ATT}}$ is the doubly robust estimator
$$\hat{\theta}^{\mathrm{ATT}} = \frac{\sum_{Z_i \in S_2} \Big[ D_i \big(Y_i - \hat{g}(0, X_i)\big) - \hat{m}(X_i)\,\frac{(1 - D_i)\big(Y_i - \hat{g}(0, X_i)\big)}{1 - \hat{m}(X_i)} \Big]}{\sum_{Z_i \in S_2} D_i}$$
and can be shown to achieve the same rate $\epsilon_g\,\epsilon_m + n^{-1/2}$ as for the ATE. We also show that this rate is unimprovable:
Theorem 2. In the same setting as Theorem 1, we have
$$\mathfrak{R}^{\mathrm{ATT}}_{n, \delta}\big(\mathcal{F}^{\mathrm{unif}}\big) \gtrsim \epsilon_g\,\epsilon_m + n^{-1/2},$$
where $\mathfrak{R}^{\mathrm{ATT}}_{n, \delta}$ denotes the minimax quantile risk for estimating the ATT.
Finally, we can also extend Theorem 1 to the weighted ATE (WATE) defined as
$$\theta_0^{w} := \mathbb{E}\big[w(X)\,\big(Y(1) - Y(0)\big)\big],$$
which arises in policy evaluation [AW21]. Here $w$ is a uniformly bounded weight function, which is not required to be non-negative. The following theorem gives the minimax structure-agnostic rate for estimating the WATE:
Theorem 3. In the same setting as Theorem 1, we have
$$\mathfrak{R}^{w}_{n, \delta}\big(\mathcal{F}^{\mathrm{unif}}\big) \gtrsim \|w\|_{L_2(\mu)}\,\epsilon_g\,\epsilon_m + n^{-1/2},$$
where $\mu$ is the uniform distribution over $\mathcal{X}$ and $\mathfrak{R}^{w}_{n, \delta}$ is the minimax quantile risk for estimating $\theta_0^{w}$. Moreover, this rate is achieved by the DML estimator
$$\hat{\theta}^{w} = \frac{1}{|S_2|} \sum_{Z_i \in S_2} w(X_i) \left[ \hat{g}(1, X_i) - \hat{g}(0, X_i) + \frac{D_i \big(Y_i - \hat{g}(1, X_i)\big)}{\hat{m}(X_i)} - \frac{(1 - D_i)\big(Y_i - \hat{g}(0, X_i)\big)}{1 - \hat{m}(X_i)} \right].$$
Conclusion and discussions
In this blog post, we introduced the setting and main results of our recent paper [JS24], which establishes the optimality of the celebrated DML algorithm, and in particular of the doubly robust estimator with sample splitting, in a structure-agnostic framework for two important causal parameters, the ATE and the ATT, as well as the weighted version of the former. For practitioners, the main takeaway is that if no particular structural insights are available, it may be better to use DML rather than more refined estimators that leverage potentially brittle assumptions on the non-parametric components of the data generating process.
[AW21] Susan Athey and Stefan Wager. Policy learning with observational data. Econometrica, 89(1):133–161, 2021.
[BKW23] Sivaraman Balakrishnan, Edward H Kennedy, and Larry Wasserman. The fundamental limits of structure-agnostic functional estimation. arXiv preprint arXiv:2305.04116, 2023.
[CCD+17] Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, and Whitney Newey. Double/debiased/Neyman machine learning of treatment effects. American Economic Review, 107(5):261–265, 2017.
[JS24] Jikai Jin and Vasilis Syrgkanis. Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation. arXiv preprint arXiv:2402.14264, 2024.
[KBRW22] Edward H Kennedy, Sivaraman Balakrishnan, James M Robins, and Larry Wasserman. Minimax rates for heterogeneous causal effect estimation. The Annals of Statistics, 52(2):793–816, 2024.
[RLM17] James M Robins, Lingling Li, and Rajarshi Mukherjee. Minimax estimation of a functional on a structured high-dimensional model. The Annals of Statistics, 45(5):1951–1987, 2017.
[RRZ94] James M Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.
[LR11] Mark J van der Laan and Sherri Rose. Targeted Learning: Causal Inference for Observational and Experimental Data. New York: Springer, 2011.
[AP09] Joshua D Angrist and Jörn-Steffen Pischke. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press, 2009.
[CHK+24] Victor Chernozhukov, Christian Hansen, Nathan Kallus, Martin Spindler, and Vasilis Syrgkanis. Applied Causal Inference Powered by ML and AI. 2024.