This is a follow-up to my ergodicity post from last week. Both posts were inspired by conversations I had with my Co-Hub-Leader Jean-Philippe Bouchaud (for the Rebuilding Macroeconomics Hub: Why are Economies Unstable?) on the role of the ergodicity assumption in science. Content warning: this post is more technical than most of my posts, and I make no apology for that. It is a technical subject.
I became interested in Chaos Theory in the early 1980s when I attended a conference in Paris organized by Jean-Michel Grandmont. Jean-Michel had been working on non-linear cycle theories, as had I, and the conference was an opportunity to explore the idea that plain vanilla general equilibrium models populated by rational agents, each of whom held rational expectations, might generate complicated dynamic paths for observable variables. As I pointed out here, many of us at the conference were persuaded by the work of Buzz Brock, who argued that even if the economic data live on a complicated non-linear attracting set, we do not have enough data to establish that fact.
The simplest example of a complicated non-linear attracting set is the tent map displayed in Figure 1. The map F(x) (plotted as the red triangle) maps the [0,1] interval into itself. The map has steady states at 0 and at x_S, but both steady states are unstable: trajectories that start close to either one move away from it. However, every path that starts in the [0,1] interval stays there. In that sense, the tent map is a perpetual motion machine.
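These properties are easy to check numerically. A minimal sketch in Python, assuming the standard form of the tent map, F(x) = 2x for x < 1/2 and F(x) = 2(1 − x) otherwise (the post does not write the formula out, so this is my assumption about Figure 1):

```python
def tent(x):
    # Assumed tent map F: [0, 1] -> [0, 1]
    # F(x) = 2x for x < 1/2, and 2(1 - x) otherwise.
    return 2 * x if x < 0.5 else 2 * (1 - x)

# The interior steady state solves 2(1 - x) = x, so x_S = 2/3.
x_s = 2 / 3
assert abs(tent(x_s) - x_s) < 1e-12  # x_S is a fixed point of F

# Start a trajectory a tiny distance from x_S. The slope of F at x_S
# is -2, so the perturbation roughly doubles in size each step and the
# path escapes the neighbourhood of x_S -- yet every iterate of F
# remains inside [0, 1].
x = x_s + 1e-6
traj = []
for _ in range(30):
    x = tent(x)
    traj.append(x)
```

After a few dozen iterations the trajectory has left any small neighbourhood of x_S, but no iterate ever leaves the unit interval, which is what "unstable steady states inside an attracting set" means here.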
While these facts are interesting, my eventual response was: so what? If you generate random numbers on a computer, those numbers are produced by more sophisticated versions of the tent map. If we lived in a world where the shocks to GDP were generated by a chaotic deterministic system, that fact should not influence our behaviour. It would simply explain the origin of what we treat as random variables. Data generated by the tent map behave predictably: they obey statistical laws. If there is a small degree of uncertainty about the value of x at date 1, that uncertainty is magnified the further you move into the future. In the limit, as T gets larger, x(T) behaves like a random variable with an invariant distribution, and the best guess of where you would expect to see x(T) is the mean of x with respect to that invariant distribution.
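The invariant distribution of the standard tent map is uniform on [0,1], so the time average of a typical trajectory converges to 1/2. A sketch of that ergodic-average claim, with one caveat I am adding myself: in double-precision arithmetic the exact tent map collapses to 0 after roughly fifty steps (each doubling strips one mantissa bit), so the simulation injects a tiny jitter to keep the trajectory generic.

```python
import random

def tent(x):
    # Assumed tent map F(x) = 2x for x < 1/2, else 2(1 - x).
    return 2 * x if x < 0.5 else 2 * (1 - x)

random.seed(0)
x = random.random()
samples = []
for _ in range(100_000):
    x = tent(x)
    # Tiny jitter keeps the iterates off the dyadic rationals, which
    # would otherwise collapse to 0 in finite-precision arithmetic.
    x = min(max(x + random.uniform(-1e-9, 1e-9), 0.0), 1.0)
    samples.append(x)

# Time average along one trajectory: close to 1/2, the mean of the
# uniform invariant distribution on [0, 1].
mean = sum(samples) / len(samples)
```

This is the sense in which tent-map data "obey statistical laws": a single long trajectory reproduces the moments of the invariant distribution.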
Jean-Philippe introduced me to the work of Philip Anderson, a Nobel Laureate in physics who worked on solid-state physics and wrote a series of illuminating papers on phenomena known as spin glasses. Without getting too far into the details, the basic idea is that for a large class of physical phenomena, it is not just the state variables that describe the world that are random; the probabilities that those variables occupy any particular state are themselves random.
Here is a question for all of you out there who have thought about these ideas. Imagine that you are offered a sequence of gambles in which you may bet on the outcome of a coin toss, where the coin comes up heads with probability p(t) and tails with probability 1-p(t), for t = 1, 2, …, and where p(t) is generated by the tent map. Suppose we assign the value 0 to heads and 1 to tails. I conjecture that the sample mean of the random variable that takes the value 0 with probability p(t) and the value 1 with probability 1-p(t) does not converge to a number as T, the number of tosses, gets larger. If the world is like this, and I believe there is a sense in which financial market data are very much like this: what does it mean to have rational expectations?
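One way to probe the conjecture is to simulate it. The sketch below is mine, not the post's: it assumes the standard tent map for p(t), adds a small jitter to dodge floating-point collapse, and tracks the running sample mean of the 0/1 outcomes at several horizons so one can inspect whether it settles down.

```python
import random

def tent(x):
    # Assumed tent map F(x) = 2x for x < 1/2, else 2(1 - x).
    return 2 * x if x < 0.5 else 2 * (1 - x)

random.seed(1)
p = random.random()  # p(1): an arbitrary starting probability
total = 0
running_means = []   # snapshots of the sample mean at several T
for t in range(1, 200_001):
    # One toss: heads (value 0) with probability p(t), tails (value 1)
    # with probability 1 - p(t).
    toss = 0 if random.random() < p else 1
    total += toss
    if t % 50_000 == 0:
        running_means.append(total / t)
    # Advance p(t) by the tent map, with jitter to avoid the
    # finite-precision collapse to 0.
    p = tent(p)
    p = min(max(p + random.uniform(-1e-9, 1e-9), 0.0), 1.0)
```

Whether the snapshots in `running_means` settle toward a single number or keep wandering as T grows is precisely the question the conjecture raises; the simulation only makes the behaviour observable for a given horizon.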