# Double machine learning

Interesting paper by some fairly serious MIT-related econometrics/statistics people (let’s call this paper DML) on how to estimate some low-dimensional parameters of interest in the presence of high-dimensional nuisance parameters. The intuition is quite nice, though the actual theoretical results are maybe not as generally applicable as they might appear at first.

### I. Setup and results

- we want to estimate some finite-dimensional parameter $\theta$
- there is some very complicated nuisance parameter $\eta$ that we don’t care about
- we’ll want to use fancy machine learning stuff to deal with $\eta$

- the true values of $\theta$ and $\eta$ are $\theta_0$ and $\eta_0$
- we have some score function $\psi(W; \theta, \eta)$ that is zero in expectation only at the true parameter values:
$$E_P[\psi(W; \theta_0, \eta_0)] = 0 \label{momentcondition}$$
where the data $W$ is generated IID from some distribution $P$, and $E_P$ denotes the expectation wrt $P$
- for a concrete example, think about the case of a simple linear regression of $Y$ on $X$:
    - $\psi$ would be the derivative of the least squares objective, $-(Y - X^\top\theta)X$
    - $\theta$ would be the coefficients
    - \eqref{momentcondition} would be the first order conditions, which hold only at the true parameter values
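As a sanity check on this concrete example, here is a minimal numerical sketch (all names here are mine): the score is the derivative of the least-squares objective, and solving the empirical analogue of the moment condition recovers exactly the OLS coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate the simple linear model Y = X @ theta0 + eps
n, p = 5000, 3
theta0 = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, p))
Y = X @ theta0 + rng.normal(size=n)

def psi(theta):
    """Score = derivative of the least-squares objective: -(Y - X theta) X."""
    resid = Y - X @ theta
    return -X * resid[:, None]          # one p-vector of scores per observation

# solve the empirical moment condition mean(psi) = 0, i.e. X'X theta = X'Y
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
```

The empirical average of `psi` at `theta_hat` is zero up to floating-point error, and `theta_hat` is close to `theta0`.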

DML recommends this procedure:

- use a score $\psi$ that satisfies a **Neyman orthogonality** condition:
    - the directional derivative $\partial_r E_P\big[\psi(W; \theta_0, \eta_0 + r(\eta - \eta_0))\big]\big|_{r=0}$ is 0 for every $\eta$ with $\eta - \eta_0$ small
    - often, you can transform your $\psi$ so that it satisfies Neyman orthogonality
- do **cross-fitting**, where you split your data into a main sample and an auxiliary sample, and:
    - use the auxiliary sample to train some fancy machine learning model to predict the nuisance parameter
    - on the main sample:
        - use this trained model to get $\hat\eta$
        - then plug this $\hat\eta$ into the empirical analogue of \eqref{momentcondition} and solve for $\theta$, so that $E_n[\psi(W; \hat\theta, \hat\eta)] = 0$ implicitly defines $\hat\theta$ (where $E_n$ here denotes averaging over the main sample)

- this $\hat\theta$ will then be $\sqrt{n}$-consistent for $\theta_0$

These two things (Neyman orthogonality and cross-fitting) are what give us $\sqrt{n}$-consistency:

- Neyman orthogonality makes $\psi$ insensitive to the nuisance parameter near the truth, so that plugging in an estimate of $\eta_0$ won’t hurt too much.
- Cross-fitting (as opposed to using the same data for both steps above) gives us a better estimate of $\eta_0$, since the main sample, where we plug in $\hat\eta$ and then estimate $\theta$, isn’t used when training the fancy ML model for predicting $\eta$, and thus $\hat\eta$ shouldn’t suffer from much overfitting on that sample.
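To make the recipe concrete, here is a minimal sketch of the whole procedure for the partially linear model (the paper’s leading example), with a small polynomial ridge regression standing in as the “fancy ML” nuisance learner. The setup and names are my own illustration, not the paper’s code.

```python
import numpy as np

rng = np.random.default_rng(1)

# partially linear model: Y = theta0*D + g(X) + eps,  D = m(X) + v
n, theta0 = 4000, 2.0
X = rng.uniform(-2, 2, size=n)
g = np.sin(2 * X)                # nuisance in the outcome equation
m = X**2 / 2                     # nuisance E[D|X]
D = m + rng.normal(size=n)
Y = theta0 * D + g + rng.normal(size=n)

def fit_predict(x_tr, y_tr, x_te, deg=5, lam=1e-3):
    """Stand-in 'ML' nuisance learner: polynomial ridge regression."""
    P_tr, P_te = np.vander(x_tr, deg + 1), np.vander(x_te, deg + 1)
    w = np.linalg.solve(P_tr.T @ P_tr + lam * np.eye(deg + 1), P_tr.T @ y_tr)
    return P_te @ w

# cross-fitting with two folds, each playing main and auxiliary once
folds = np.array_split(rng.permutation(n), 2)
num = den = 0.0
for k in (0, 1):
    main, aux = folds[k], folds[1 - k]
    # nuisance models trained on the auxiliary sample only
    l_hat = fit_predict(X[aux], Y[aux], X[main])    # estimate of E[Y|X]
    m_hat = fit_predict(X[aux], D[aux], X[main])    # estimate of E[D|X]
    # orthogonal score for this model: psi = (Y - l - theta*(D - m)) * (D - m);
    # solving mean(psi) = 0 for theta on the main sample gives num/den below
    num += np.sum((Y[main] - l_hat) * (D[main] - m_hat))
    den += np.sum((D[main] - m_hat) ** 2)

theta_hat = num / den
```

Averaging the score over both fold assignments, as above, is the standard way to use all the data while keeping the nuisance training separate from the sample where the score is evaluated.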

We’ll give a slightly deeper intuitive understanding for where these two conditions come from, as well as some considerations on how applicable this might be in practice.

### II. Intuition

#### II.1. First, without the nuisance parameter:

If we knew the true value of the nuisance parameter $\eta_0$, we could just plug it into the empirical analogue of \eqref{momentcondition} and solve for $\theta$ to get an estimate $\hat\theta$:

$$E_n[\psi(W; \hat\theta, \eta_0)] = 0$$

For sufficiently large $n$, $\hat\theta$ will be pretty close to $\theta_0$ and $\psi$ will be approximately linear in $\theta$, so we can get the asymptotic distribution of $\hat\theta$:

$$J \sqrt{n}(\hat\theta - \theta_0) \approx -\sqrt{n}\, E_n[\psi(W; \theta_0, \eta_0)] \to N(0, \Sigma) \label{expansion0}$$

where $J = E_P[\partial_\theta \psi(W; \theta_0, \eta_0)]$ and $\Sigma = \mathrm{Var}_P[\psi(W; \theta_0, \eta_0)]$, so that $\sqrt{n}(\hat\theta - \theta_0) \to N\big(0,\, J^{-1}\Sigma (J^{-1})^\top\big)$.

That is, assuming we know the true nuisance parameter $\eta_0$, our estimator $\hat\theta$ is $\sqrt{n}$-consistent, which is pretty nice.
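As a quick sanity check on the $\sqrt{n}$ rate (a simulation I added, not from the paper): in the linear-regression example with no nuisance parameter, the RMSE of $\hat\theta$ should shrink like $n^{-1/2}$, i.e. roughly halve each time $n$ quadruples.

```python
import numpy as np

rng = np.random.default_rng(2)

def rmse_at(n, reps=300):
    """Monte Carlo RMSE of the slope estimate at sample size n
    (known-nuisance case: just solve the empirical moment condition)."""
    errs = []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.5 * x + rng.normal(size=n)
        errs.append((x @ y) / (x @ x) - 1.5)
    return np.sqrt(np.mean(np.square(errs)))

# quadrupling n should roughly halve the RMSE if the rate is n^(-1/2)
sizes = [100, 400, 1600]
rmses = [rmse_at(n) for n in sizes]
ratios = [rmses[i] / rmses[i + 1] for i in range(len(sizes) - 1)]
```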

#### II.2. Now, with the nuisance parameter:

Unfortunately:

- we don’t know $\eta_0$
- we still need to estimate $\theta$
- we’d still like to get this same $\sqrt{n}$-consistency

To estimate $\theta$ in this case, probably the most natural thing to do is:

- get a preliminary estimate $\hat\eta$ of $\eta_0$
- pretend $\hat\eta = \eta_0$, and then do the same thing we did before where we solve for $\hat\theta$ using the empirical analogue of \eqref{momentcondition}:

$$E_n[\psi(W; \hat\theta, \hat\eta)] = 0 \label{empiricalmomentcondition}$$

Note that so long as this nuisance parameter estimate converges to the truth ($\hat\eta \to \eta_0$) and $\psi$ is smooth, the solution of \eqref{empiricalmomentcondition} wrt $\theta$ should also be well-behaved, so we should still get $\hat\theta \to \theta_0$. So we can expand \eqref{empiricalmomentcondition} around $\theta_0$:

$$\hat J \sqrt{n}(\hat\theta - \theta_0) \approx -\sqrt{n}\, E_n[\psi(W; \theta_0, \hat\eta)] \label{expansion1}$$

with $\hat J = E_n[\partial_\theta \psi(W; \theta_0, \hat\eta)]$.

Basically, this looks a lot like \eqref{expansion0}, except with some $\hat\eta$s instead of $\eta_0$s. If we could make this look exactly like \eqref{expansion0} as $n$ gets big, then we would get $\sqrt{n}$-consistency of $\hat\theta$ even in this case where we don’t know the true $\eta_0$ and have to plug in $\hat\eta$.

- the left hand side is easy:
    - $\hat J$ is just an average
    - so it should easily go to $J$ as $n$ gets big, so long as $\psi$ is smooth and $\hat\eta$ converges to $\eta_0$
    - so the left side of \eqref{expansion1} will look like the left side of \eqref{expansion0} as $n$ gets big

- the right hand side is more involved:
    - in order to make this resemble the right side of \eqref{expansion0}, we need $\sqrt{n}\,E_n[\psi(W; \theta_0, \hat\eta)] - \sqrt{n}\,E_n[\psi(W; \theta_0, \eta_0)] \to 0$
    - we could use a stochastic equicontinuity argument if we’re using non-ML methods to get $\hat\eta$ (see Andrews 1994)
    - but that argument minimally requires that the set of potential $\hat\eta$ have finite VC dimension, whereas in modern ML applications we’ll typically fit increasingly complex functions as sample size increases, so this kind of classical argument doesn’t work

So instead, let’s continue with \eqref{expansion1} and expand the RHS around $\eta_0$:

$$-\sqrt{n}\, E_n[\psi(W; \theta_0, \hat\eta)] \approx -\sqrt{n}\, E_n[\psi(W; \theta_0, \eta_0)] - a - b \label{expansion2}$$

where:

- $a = \sqrt{n}\, E_n\big[\partial_\eta \psi(W; \theta_0, \eta_0)[\hat\eta - \eta_0]\big]$ is the directional derivative of $\psi$ wrt $\eta$ in the direction of $\hat\eta - \eta_0$
- $b = \tfrac{\sqrt{n}}{2}\, E_n\big[\partial^2_\eta \psi(W; \theta_0, \bar\eta)[\hat\eta - \eta_0, \hat\eta - \eta_0]\big]$ is the second derivative in that direction, evaluated at some intermediate $\bar\eta$
- we’ve included the second derivative here since it’s not ex-ante obvious that it’ll vanish

In order to make \eqref{expansion2} look like \eqref{expansion0} (and thus get $\sqrt{n}$-consistency for $\hat\theta$), we just need the first- and second-derivative terms $a$ and $b$ to both go to 0. Here’s where Neyman orthogonality and cross-fitting come in:

- the Neyman (near)-orthogonality condition pretty much just amounts to assuming that $a$ goes to 0
- cross-fitting plus the assumption that $\|\hat\eta - \eta_0\| = o(n^{-1/4})$ on hold-out data gives us $b$ going to 0
    - the $o(n^{-1/4})$ rate guarantees that $b$ will be $o(1)$
        - to see this, just consider the case where $\eta$ is 1-dimensional, where this second directional derivative is just $(\hat\eta - \eta_0)^2\, \partial^2_\eta \psi(W; \theta_0, \bar\eta)$
        - basically the same thing holds in general so long as $\psi$ is smooth
    - the cross-fitting just means that we estimate $\hat\eta$ on an auxiliary sample that we then don’t re-use when estimating $\theta$, so that the hold-out rate for $\hat\eta$ applies on the main sample, where we’re plugging $\hat\eta$ in and using it to estimate $\theta$
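Spelling out the rate arithmetic behind the second-order term (assuming the second derivative of $\psi$ in $\eta$ stays bounded near $\eta_0$):

$$|b| \;\lesssim\; \sqrt{n}\,\|\hat\eta - \eta_0\|^2 \;=\; \sqrt{n}\cdot o_P\!\left(n^{-1/4}\right)^2 \;=\; \sqrt{n}\cdot o_P\!\left(n^{-1/2}\right) \;=\; o_P(1).$$

So the hold-out rate is exactly what’s needed to kill the second-order term.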

A bit more intuition:

- that we need a Neyman orthogonality condition seems reasonable:
    - we’re plugging in an estimate of the nuisance parameter to stand in for the truth when we estimate $\theta$
    - so if the equation we solve to estimate $\theta$ depends on the value of this nuisance parameter, then small errors in the nuisance parameter will mess things up
    - Neyman orthogonality just says that this dependence gets small as our data gets big

- estimating $\hat\eta$ on an auxiliary data set separate from the data we use to estimate $\theta$ is basically just to control for overfitting
    - if we used the same data for estimating $\theta$ as for $\hat\eta$, then the expectations in \eqref{expansion2} would all be relative to the data that $\hat\eta$ was estimated on
    - as a result, these in-sample estimates could overfit, and thus $\hat\eta$ might not converge to $\eta_0$ at the required rate
    - as an aside: if we don’t do cross-fitting, but rather find other ways to limit in-sample overfitting, then everything should still be fine
        - the DML authors have some other work where they take this approach
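Neyman orthogonality is also easy to check numerically in a given model. Below is a toy example of my own, again in the partially linear model: perturb the outcome-equation nuisance slightly away from its true value, and compare how much the average score moves under a naive, non-orthogonal score versus the orthogonalized (partialled-out) one. For simplicity only one component of the nuisance is perturbed.

```python
import numpy as np

rng = np.random.default_rng(3)

# partially linear model: Y = theta0*D + g(X) + eps,  D = m(X) + v
n, theta0 = 200_000, 2.0
X = rng.uniform(-2, 2, size=n)
g = np.sin(2 * X)                    # true outcome-equation nuisance
m = X**2 / 2                         # true E[D|X]
D = m + rng.normal(size=n)
Y = theta0 * D + g + rng.normal(size=n)
ell = theta0 * m + g                 # true E[Y|X]

h = X**2                             # an arbitrary direction to perturb in
r = 1e-3                             # size of the perturbation

def naive_score_mean(g_hat):
    # non-orthogonal score: psi = (Y - theta*D - g(X)) * D
    return np.mean((Y - theta0 * D - g_hat) * D)

def orth_score_mean(l_hat):
    # orthogonal (partialled-out) score, perturbing only the E[Y|X] piece:
    # psi = (Y - l(X) - theta*(D - m(X))) * (D - m(X))
    return np.mean((Y - l_hat - theta0 * (D - m)) * (D - m))

# finite-difference directional derivatives of the mean score wrt the nuisance
d_naive = (naive_score_mean(g + r * h) - naive_score_mean(g)) / r
d_orth = (orth_score_mean(ell + r * h) - orth_score_mean(ell)) / r
```

The derivative for the naive score stays bounded away from zero, while the derivative for the orthogonal score is zero up to sampling noise: small errors in the nuisance move the naive estimating equation but not the orthogonal one.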

### III. Caveats

#### III.1. The set of ML algorithms DML theory applies to

In general, $\sqrt{n}$-consistency of $\hat\theta$ requires that $\|\hat\eta - \eta_0\| = o(n^{-1/4})$. That is, whatever machine learning algorithm you’re using to approximate $\eta_0$ has to exhibit convergence to the truth at this rate. Even in the special case where the second derivative $\partial^2_\eta \psi$ is 0 for all $\eta$, you still need $\hat\eta \to \eta_0$. The DML paper lists several examples of algorithms / problem settings with existing theoretical results that give this rate, but none of them are really *that* close to things you might do in practice:

- Lasso for sparse models: this applies when the nuisance parameter is some linear function of a sparse set of parameters, which is a bit of an implausible assumption most of the time.
- Neural networks: the Chen White neural network result is for shallow feedforward networks.
- Boosting: the Luo Spindler result assumes the truth is a sparse linear model, and the base learners in the boosting algorithm are univariate linear regressions, which is quite different from the popular tree-boosting stuff that people often do in practice.
- Trees/forests: the Wager Walther concentration result applies to convergence of a trained tree to the best theoretical tree with the same splits, and doesn’t say anything about approximating some true conditional mean.

Also, I assume the Wager Athey random forest result is not mentioned because that convergence result is pointwise rather than in an $L^2$ sense.

So, DML doesn’t actually provide much in the way of theoretical guarantees for estimating the nuisance parameter via ML methods that provide actually competitive predictive performance, e.g. tree boosting / random forests / deeper neural networks, since the necessary convergence results don’t yet exist for these methods.

#### III.2. The Neyman orthogonalization procedure

The DML procedure requires that we have some score $\psi$ that satisfies Neyman orthogonality. This is generically not the case, so we would like to have some way of transforming an arbitrary $\psi$ into something that satisfies Neyman orthogonality. The DML paper provides illustrations of how to do this for a variety of different cases, and applies them to specific leading examples that economists find interesting. However, it’s not entirely clear how generalizable these procedures are, especially when $\eta$ is a complicated thing we’re approximating by general ML methods.

For example, the ‘concentrating-out’ approach for Neyman orthogonalization in the case of M-estimation with infinite-dimensional nuisance parameters (section 2.23 in the DML paper) requires computing the optimal nuisance parameter $\eta$ as a function of the parameter of interest $\theta$, and then computing the derivative wrt $\theta$ of this function-valued mapping from $\theta$ to the optimal $\eta$. This approach is then applied to the leading example of interest (the partially linear model) where the mapping from $\theta$ to the optimal $\eta$ is fairly easy to compute and differentiate, due to the particular functional forms involved. It seems like in most cases that are less straightforward, it’s not going to be so easy / possible to do this.

The DML paper presents a variety of other ways to orthogonalize stuff, and works out some more economics-relevant examples. However, in general cases with high-dimensional nuisance parameters, it doesn’t appear that there’s a mechanical way to orthogonalize the score. As a result, it may be necessary to manually orthogonalize $\psi$ before the DML procedure can be applied.

### References

- Double/Debiased Machine Learning for Treatment and Causal Parameters (the DML paper)
- Andrews (1994), Asymptotics for Semiparametric Econometric Models via Stochastic Equicontinuity
- Luo and Spindler, linear boosting convergence rate
- Wager and Walther, tree concentration rate
- Chen and White, neural network convergence rate
- Wager and Athey, pointwise asymptotic normality of random forests