"Everything else the same" is approximately the idea behind the partial derivative from multivariate calculus. If I have a function f(x,y), and I ask for ∂f/∂x, I'm asking for the change in f in response to a small change in x with y held constant. Thermodynamics, in physics, is the only context in which I've seen the ceteris paribus problem formally attended to; for an ideal gas, for example, PV=nRT, and entropy can be written as a function of any two of P, V, and T; specific heat at constant volume C

_{V}is T(∂S/∂T)

_{V}, that is to say with volume constant and P changing as T changes, while C

_{P}=T(∂S/∂T)

_{P}is the corresponding expression for constant pressure (with V changing linearly with T). The two can be related; C

_{P}-C

_{V}=VT(α

^{2}/κ) where α=(1/V)(∂V/∂T)

_{P}and κ=-(1/V)(∂V/∂ P)

_{T}are the volume coefficient of expansion and the isothermal compressibility of the gas, respectively. This is related to the Slutsky equation from microeconomics, but the typical notation there serves to obscure rather than elucidate the fundamental point that partial derivatives can't hold "everything else" fixed.
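That identity can be sanity-checked symbolically. The sketch below (assuming sympy is available) plugs the ideal-gas law into the definitions of α and κ and confirms that C_P − C_V reduces to nR:

```python
# Sanity check of C_P - C_V = V*T*(alpha^2/kappa) for an ideal gas,
# where V = nRT/P; for this case the difference should reduce to n*R.
import sympy as sp

n, R, T, P = sp.symbols('n R T P', positive=True)
V = n * R * T / P                         # ideal gas law solved for V

alpha = sp.simplify(sp.diff(V, T) / V)    # (1/V)(dV/dT) at constant P
kappa = sp.simplify(-sp.diff(V, P) / V)   # -(1/V)(dV/dP) at constant T

print(alpha)   # 1/T for an ideal gas
print(kappa)   # 1/P for an ideal gas
print(sp.simplify(V * T * alpha**2 / kappa - n * R) == 0)  # True
```

Here α² / κ = (1/T²)·P, so VT(α²/κ) = (nRT/P)·T·(P/T²) = nR, the familiar ideal-gas result.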

Anyway, economists frequently look at statistical relationships between variables and seek to tease out causal relationships. If increases in investment tend to be followed by economic growth, for example, one might suppose that the growth is caused by the increase in investment; but it seems logical that investors would invest more when they expect economic growth than when they don't, so even the time-ordering in this case may run reverse to the causal relationship.*

Of course, it's possible that both causal relationships exist, or even that neither does; perhaps both are being driven by something else. The data do not — indeed, no set of data *can* — make a distinction between causal explanations. In order to tease out causal effects, econometricians will employ "instrumental variables", but even then the instruments are useful for implying causation only because there is an outside theoretical reason to believe that they themselves stand in a causal relationship; without outside theory, simply throwing more variables at the problem cannot give causation.
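The role of the outside theory shows up clearly in a toy simulation (all variable names here are illustrative, not from the post): an unobserved confounder biases the naive regression of y on x, while an instrument z, which by construction moves x without entering y directly, recovers the causal coefficient.

```python
# Toy instrumental-variables sketch (illustrative, not from the post).
# An unobserved confounder u drives both x and y, so regressing y on x
# is biased; an instrument z that shifts x but enters y only through x
# recovers the true coefficient as cov(z, y) / cov(z, x).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0                              # true causal effect of x on y

u = rng.normal(size=n)                  # unobserved confounder
z = rng.normal(size=n)                  # instrument
x = z + u + rng.normal(size=n)
y = beta * x + 3.0 * u + rng.normal(size=n)

ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # biased: picks up u
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # consistent for beta

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # OLS near 3.0, IV near 2.0
```

Note that the instrument only identifies beta because the exclusion of z from the y equation is assumed on theoretical grounds; nothing in the data itself certifies that assumption.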

At this point, then, it is worth asking what causality means. Perhaps something causes us to expect an increase in investment that isn't connected to other changes we usually expect to affect GDP, so we predict an increase in GDP, or perhaps there's an increase in GDP forecasts that is not connected with other things we expect to drive investment, so we expect an increase in investment. Presumably this means we believe something is happening that wasn't happening before — something we expect to change the data-generating process. From that standpoint, causal interpretations of empirical data affect out-of-sample forecasts based on previous observations.

Colloquially, causality is often connected with intention. If increases in investment cause later increases in GDP, perhaps we can increase investment, and that will increase GDP. If this is the case, perhaps it matters *how* "we" — presumably the government or some other relatively unitary actor — go about increasing investment. Formally, perhaps there is a set of choice variables available to "us", and the DGP is presumed to be a function of those choice variables. Here, again, if we're looking to use the empirical data we do have (for other values of our choice variables) to ascertain the distribution of variables induced by a set of choices for which we have no empirical data, this is essentially out-of-sample prediction once more, and the extrapolation is again going to have to be theory-driven.

So this is how I'm currently thinking about causality, at least from an econometric point of view: Causality is simply about the ways in which I think an underlying data generating process is likely to change, and those ways in which it is not. (I may think differently next month.)

* Indeed, Le Chatelier noted that near a locally stable equilibrium, effects tend to inhibit their causes; if both take place with a lag, you may see a cross-correlation between the two variables of either sign at various lags, as one causal relationship outweighs the other.

## 1 comment:

I see a study showing that, while HDL cholesterol has in the past proved a good marker of heart health, manipulating it directly does not seem to improve heart health. Perhaps this provides a further useful elucidation of these issues.

Logically, "A implies B" is the same as "either B or not A", and this is largely what "A causes B" feels like; it's at least related. So what's important here is that it is possible to generate a rise in HDL cholesterol without an improvement in heart health. A causal relationship, at least if it were of the unconfounded, iron clad sort that we always wish for but never happens, would forbid such a process; for any set of external variables, the probability of both a high level of HDL cholesterol and poor heart health should be low compared to other relevant probabilities.
