*partial* optimum, rather than a full global optimum.

Suppose you have a marble in a bowl on a table, and you want to figure out where it goes. Roughly speaking, you expect it to seek to lower its potential energy. Usually, though, it will settle at the bottom of the bowl, even though it would get a lower potential energy by jumping out of the bowl onto the floor. Quantum systems tend to "do better" at finding a global minimum than classical systems; liquid helium, in its superfluid state, will actually climb out of bowls to find lower potential energy states. Even quantum systems, though, often end up more or less in states where the first-order conditions are satisfied, rather than at the solution to the actual global optimization problem. This is perhaps most elegantly expressed with path integrals: you associate a quantum-mechanical amplitude with each point in your state space, make it undulate as a function of the optimand, and integrate. Wherever the optimand isn't stationary, the amplitude cancels itself out, leaving only the effect of its piling up where the optimand satisfies the first-order conditions.
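That cancellation can be seen numerically. Here's a sketch of my own (the phase function f(x) = x² and the constant `lam` are arbitrary choices, not anything from the physics above): integrating an oscillating amplitude over a window containing a stationary point of f gives a much larger result than an equal-width window where f is changing.

```python
import numpy as np

# Illustrative stationary-phase demo (my own example; f and lam are
# arbitrary). We integrate exp(i * lam * f(x)) over two windows of
# equal width. Near the stationary point x = 0 the phase is flat, so
# amplitudes pile up; away from it, rapid oscillation cancels out.

lam = 200.0

def f(x):
    return x**2  # "optimand"; the first-order condition f'(x) = 0 holds at x = 0

def window_integral(a, b, n=400_000):
    # plain Riemann sum of exp(i * lam * f(x)) over [a, b]
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(np.exp(1j * lam * f(x))) * dx

near = window_integral(-0.2, 0.2)  # window containing the stationary point
far = window_integral(1.0, 1.4)    # same width, no stationary point

print(abs(near), abs(far))  # the stationary-point window dominates
```

The "near" window's contribution is larger by more than an order of magnitude, even though both windows have the same width and the amplitude has modulus one everywhere.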

In economics and game theory, an "equilibrium" will typically maximize each agent's utility function subject to variation only in that agent's own choice variables; externalities are, somewhat famously, left out. I'm tempted to try to apply a path-integral technique, but in game theory in particular the optimum is often at a "corner solution" where a constraint binds, so that the optimand doesn't satisfy the usual first-order conditions there. Something complicated with Lagrange multipliers might be a possibility, but I suspect that the (misleadingly named) "smoothed utility functions" would effectively do the same thing more easily. I might then try to integrate "near" an equilibrium, but only in the dimensions corresponding to one particular agent's choice variables.
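To make the per-agent optimization concrete, here's a small sketch of my own using a standard symmetric Cournot duopoly (the parameters `a`, `b`, `c` are illustrative): each firm maximizes only over its own quantity, holding the other's fixed, and iterating best responses lands on the Nash equilibrium, where each firm's own first-order condition holds.

```python
# Illustrative Cournot duopoly (my own example, not from the text).
# Inverse demand is P = a - b*(q1 + q2); each firm i chooses q_i to
# maximize its own profit (P - c) * q_i, taking the other firm's
# quantity as given.

a, b, c = 10.0, 1.0, 1.0

def best_response(q_other):
    # First-order condition of firm i's own problem:
    #   a - c - 2*b*q_i - b*q_other = 0
    # clipped at zero, since quantities must be nonnegative -- the
    # "corner solution" case, where the FOC needn't hold.
    return max(0.0, (a - c - b * q_other) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2)  # converges to the Nash equilibrium (a - c) / (3*b) = 3.0
```

Note that the equilibrium here is only a per-agent optimum: both firms would earn more at the joint-monopoly quantities, but neither can reach that point by varying its own choice variable alone.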

I wonder whether I can make something useful of that.