Decomposition analysis to identify intervention targets for reducing disparities

There has been considerable interest in using decomposition methods from epidemiology (mediation analysis) and economics (the Oaxaca-Blinder decomposition) to understand how health disparities arise and how they might change upon intervention. It has not been clear when estimates from the Oaxaca-Blinder decomposition can be interpreted causally, because its implementation does not explicitly address potential confounding of target variables. While mediation analysis does explicitly adjust for confounders of target variables, it does so in a way that entails equalizing confounders across racial groups, which may not reflect the intended intervention. Revisiting prior analyses of the National Longitudinal Survey of Youth on disparities in wages, unemployment, incarceration, and overall health, with test scores (taken as a proxy for educational attainment) as the target of intervention, we propose and demonstrate a novel decomposition that controls for confounders of test scores (measures of childhood SES) while leaving their association with race intact. We compare this decomposition with others that use standardization (to equalize childhood SES alone), mediation analysis (to equalize test scores within levels of childhood SES), and an approach that equalizes both childhood SES and test scores. We also show how these decompositions, including our novel proposals, are equivalent to causal implementations of the Oaxaca-Blinder decomposition.
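As a rough sketch of the estimands involved (the notation here is ours, not the paper's): write $R$ for race, $Y$ for the outcome, $M$ for test scores, and $C$ for childhood SES. Decompositions of this kind split the observed disparity into a part that would be eliminated by an intervention shifting $M$ and a residual part:

\begin{align*}
\underbrace{E[Y \mid R=1] - E[Y \mid R=0]}_{\text{observed disparity}}
&= \underbrace{E[Y \mid R=1] - E[Y_{G} \mid R=1]}_{\text{disparity reduction}}
+ \underbrace{E[Y_{G} \mid R=1] - E[Y \mid R=0]}_{\text{residual disparity}},
\end{align*}

where $Y_{G}$ denotes the outcome had $M$ been drawn from a reference distribution $G$. The decompositions compared above differ in the choice of $G$: whether it conditions on $C$, whether $C$ itself is also equalized, and whether the race-$C$ association is left intact.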

Comments: John Jackson is Assistant Professor in the Departments of Epidemiology and Mental Health at the Johns Hopkins Bloomberg School of Public Health and Tyler VanderWeele is Professor in the Departments of Epidemiology and Biostatistics at the Harvard T.H. Chan School of Public Health. Correspondence to john.jackson@jhu.edu

Similar Publications

This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy for computing expectations that can only be approximated with some bias, for instance through the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive.
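The identity underlying MLMC (standard in this literature, sketched here in our own notation) is a telescoping sum over approximation levels $\ell = 0, \dots, L$:

\begin{align*}
E[P_L] = E[P_0] + \sum_{\ell=1}^{L} E[P_\ell - P_{\ell-1}],
\end{align*}

where $P_\ell$ is the level-$\ell$ approximation of the quantity of interest. Each term is estimated with an independent Monte Carlo average; because the differences $P_\ell - P_{\ell-1}$ have small variance at fine levels, few samples are needed where simulation is most expensive, which is the source of the cost savings.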


The multivariate linear regression model is an important tool for investigating relationships between several response variables and several predictor variables. The primary interest is in inference about the unknown regression coefficient matrix. We propose multivariate bootstrap techniques as a means of drawing such inferences.
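As a hedged illustration of one such scheme (a generic residual bootstrap; the paper's specific techniques may differ), with simulated data standing in for a real design:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: n observations, p predictors, q responses.
    n, p, q = 200, 3, 2
    X = rng.normal(size=(n, p))
    Y = X @ rng.normal(size=(p, q)) + rng.normal(size=(n, q))

    # OLS estimate of the regression coefficient matrix.
    B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B_hat

    # Residual bootstrap: resample whole rows of the residual matrix so the
    # cross-response dependence is preserved, then refit.
    B = 1000
    boot = np.empty((B, p, q))
    for b in range(B):
        Y_star = X @ B_hat + resid[rng.integers(0, n, size=n)]
        boot[b], *_ = np.linalg.lstsq(X, Y_star, rcond=None)

    # Elementwise bootstrap standard errors for B_hat.
    print(boot.std(axis=0))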


In probabilistic topic models, the quantity of interest, a low-rank matrix whose columns are topic vectors, is hidden in the text corpus matrix, masked by noise, and Singular Value Decomposition (SVD) is a potentially useful tool for learning such a low-rank matrix. However, the connection between this low-rank matrix and the singular vectors of the text corpus matrix is usually complicated and hard to spell out, so using SVD to learn topic models is challenging. We overcome this challenge by revealing a surprising insight: there is a low-dimensional $\textit{simplex}$ structure which can be viewed as a bridge between the low-rank matrix of interest and the SVD of the text corpus matrix, and which allows us to conveniently reconstruct the former using the latter.
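A minimal sketch of the kind of computation involved (our illustration of the generic SVD step only; the paper's simplex construction is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy text corpus matrix: n documents x p vocabulary words.
    A = rng.poisson(2.0, size=(100, 50)).astype(float)
    A /= A.sum(axis=1, keepdims=True)        # rows as word frequencies

    # Rank-K SVD of the corpus matrix; the leading singular vectors are the
    # raw material from which the low-dimensional structure is recovered.
    K = 3
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_K, V_K = U[:, :K], Vt[:K].T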


Current statistical inference problems in areas like astronomy, genomics, and marketing routinely involve the simultaneous testing of thousands -- even millions -- of null hypotheses. For high-dimensional multivariate distributions, these hypotheses may concern a wide range of parameters, with complex and unknown dependence structures among variables. In analyzing such hypothesis testing procedures, gains in efficiency and power can be achieved by performing variable reduction on the set of hypotheses prior to testing.
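A hedged sketch of the general idea (independent-filtering-style variable reduction followed by Benjamini-Hochberg; the screening statistic and thresholds here are our choices, not the article's):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Toy two-group data: m variables, n samples per group, a few signals.
    m, n = 5000, 10
    a = rng.normal(size=(m, n))
    b = rng.normal(size=(m, n))
    b[:100] += 1.5                      # 100 variables with a real difference

    # Variable reduction: filter on overall variance, computed ignoring the
    # group labels so it is roughly independent of the test under the null.
    pooled_var = np.hstack([a, b]).var(axis=1)
    keep = pooled_var > np.quantile(pooled_var, 0.5)

    # Test only the surviving hypotheses, then apply Benjamini-Hochberg.
    _, p = stats.ttest_ind(a[keep], b[keep], axis=1)
    p_sorted = np.sort(p)
    k = np.arange(1, len(p) + 1)
    passed = p_sorted <= 0.05 * k / len(p)
    n_rej = k[passed].max() if passed.any() else 0
    print(f"{n_rej} rejections among {len(p)} filtered hypotheses")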


The ensemble Kalman filter (EnKF) is a computational technique for approximate inference on the state vector in spatio-temporal state-space models. It has been successfully used in many real-world nonlinear data-assimilation problems with very high dimensions, such as weather forecasting. However, the EnKF is most appropriate for additive Gaussian state-space models with a linear observation equation and without unknown parameters.
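For concreteness, a minimal stochastic EnKF analysis step for exactly the additive-Gaussian, linear-observation setting the abstract describes (the function, names, and toy numbers are ours):

    import numpy as np

    rng = np.random.default_rng(3)

    def enkf_update(ensemble, y, H, R):
        # One stochastic EnKF analysis step with perturbed observations.
        # ensemble: (n_ens, n_state); y: (n_obs,); H: (n_obs, n_state);
        # R: (n_obs, n_obs) observation-error covariance.
        n_ens = ensemble.shape[0]
        anom = ensemble - ensemble.mean(axis=0)
        P = anom.T @ anom / (n_ens - 1)               # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        # Perturbing observations keeps the analysis spread consistent with R.
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens)
        return ensemble + (y_pert - ensemble @ H.T) @ K.T

    # Toy example: 2-d state, observed through its first coordinate.
    ens = rng.normal(size=(50, 2))
    post = enkf_update(ens, np.array([0.5]),
                       H=np.array([[1.0, 0.0]]), R=np.array([[0.1]]))
    print(post.mean(axis=0))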


Variable clustering is one of the most important unsupervised learning methods, ubiquitous in most research areas. In the statistics and computer science literature, most of the clustering methods lead to non-overlapping partitions of the variables. However, in many applications, some variables may belong to multiple groups, yielding clusters with overlap.


We study the problem of testing for structure in networks using relations between the observed frequencies of small subgraphs. We consider the statistics \begin{align*} T_3 & =(\text{edge frequency})^3 - \text{triangle frequency}\\ T_2 & =3(\text{edge frequency})^2(1-\text{edge frequency}) - \text{V-shape frequency} \end{align*} and prove a central limit theorem for $(T_2, T_3)$ under an Erd\H{o}s-R\'{e}nyi null model. We then analyze the power of the associated $\chi^2$ test statistic under a general class of alternative models.
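The statistics are cheap to compute from an adjacency matrix. In the sketch below, frequencies are normalized per vertex pair (edges) and per vertex triple (triangles and induced V-shapes); this normalization is our reading of the statistics, under which both $T_2$ and $T_3$ are centered near zero under an Erd\H{o}s-R\'{e}nyi model:

    import numpy as np
    from math import comb

    def small_subgraph_stats(A):
        # T2, T3 from the edge, V-shape, and triangle frequencies of an
        # undirected simple graph with 0/1 symmetric adjacency matrix A.
        n = A.shape[0]
        deg = A.sum(axis=1)
        edges = A.sum() / 2
        triangles = np.trace(A @ A @ A) / 6
        two_paths = (deg * (deg - 1) / 2).sum()
        v_shapes = two_paths - 3 * triangles      # 2 edges, 1 non-edge

        e_freq = edges / comb(n, 2)
        t_freq = triangles / comb(n, 3)
        v_freq = v_shapes / comb(n, 3)

        T3 = e_freq**3 - t_freq
        T2 = 3 * e_freq**2 * (1 - e_freq) - v_freq
        return T2, T3

    rng = np.random.default_rng(4)
    n, p = 200, 0.1
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1); A = A + A.T                # Erdos-Renyi draw
    print(small_subgraph_stats(A))                # both near 0 under the null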


A new recalibration post-processing method is presented to improve the quality of the posterior approximation when using Approximate Bayesian Computation (ABC) algorithms. Recalibration may be used in conjunction with existing post-processing methods, such as regression adjustments. In addition, this work extends and strengthens the links between ABC and indirect inference algorithms, allowing more extensive use of misspecified auxiliary models in the ABC context.
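For orientation, a minimal rejection-ABC run with a Beaumont-style linear regression adjustment, the kind of post-processing the recalibration method is designed to work alongside (the toy model and tolerance are ours; the recalibration step itself is not reproduced):

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy model: theta ~ N(0, 1) prior; summary s = mean of 50 N(theta, 1) draws.
    def simulate_summary(theta, n=50):
        return rng.normal(theta, 1.0, size=n).mean()

    s_obs = 0.8                               # observed summary statistic

    # Rejection ABC: keep draws whose simulated summary lands near s_obs.
    theta = rng.normal(0.0, 1.0, size=20_000)
    s_sim = np.array([simulate_summary(t) for t in theta])
    dist = np.abs(s_sim - s_obs)
    keep = dist < np.quantile(dist, 0.01)

    # Linear regression adjustment: shift accepted draws along the fitted
    # theta-on-summary regression line toward the observed summary.
    b1, b0 = np.polyfit(s_sim[keep], theta[keep], 1)
    theta_adj = theta[keep] + b1 * (s_obs - s_sim[keep])
    print(theta_adj.mean(), theta_adj.std())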


We provide compact algebraic expressions that replace the lengthy symbolic-algebra-generated integrals I6 and I8 in Part I of this series of papers [1]. The MRSE entries of Part I, Table 4.3 are thus updated to simpler algebraic expressions.


Distributional approximations of (bi-)linear functions of sample variance-covariance matrices play a critical role in the analysis of vector time series, as they are needed for various purposes, especially to draw inference on the dependence structure in terms of second moments and to analyze projections onto lower-dimensional spaces such as those generated by principal components. This particularly applies to the high-dimensional case, where the dimension $d$ is allowed to grow with the sample size $n$ and may even be larger than $n$. We establish large-sample approximations for such bilinear forms related to the sample variance-covariance matrix of a high-dimensional vector time series in terms of strong approximations by Brownian motions.
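Concretely, with observations $X_1, \dots, X_n \in \mathbb{R}^d$ and weight vectors $v, w \in \mathbb{R}^d$ (our notation, not necessarily the paper's), the objects in question are of the form

\begin{align*}
\hat\Sigma_n = \frac{1}{n} \sum_{t=1}^{n} (X_t - \bar X)(X_t - \bar X)^\top,
\qquad
B_n(v, w) = v^\top \hat\Sigma_n w,
\end{align*}

and the strong approximation results couple the fluctuations of such forms with those of a Brownian motion, in a regime where $d = d_n$ may grow with, and even exceed, $n$.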