Previous chapters focused on identifying and estimating causal effects under the assumption that all relevant confounders Z influencing both treatment T and outcome Y are observed. In many practical machine learning applications, however, this assumption is violated: unmeasured confounders or systematic differences between the groups being compared (selection bias) can lead to incorrect causal conclusions.
This chapter addresses this fundamental challenge. We will examine several advanced techniques designed to estimate causal effects when important variables are unobserved, covering the topics outlined in the sections below.
By the end of this chapter, you will understand the principles behind these methods and be able to apply them to mitigate bias arising from unobserved factors in your causal analyses.
4.1 Advanced Instrumental Variables (IV) Methods
4.2 Deep Learning and Kernel Methods for IV
4.3 Regression Discontinuity Designs (RDD)
4.4 Difference-in-Differences (DiD) with Panel Data
4.5 Adapting Selection Bias Correction Methods
4.6 Proximal Causal Inference Principles
4.7 Hands-on Practical: IV and RDD Analysis