On the problems of planning for chaotic/complex futures

Perhaps reflecting this week’s theme about the confusing pasts and futures of sports teams, and echoing Dan Gardner’s writing about the perils of futurology, sociology blogger Daniel Little has posted at his blog Understanding Society a review of an interesting book that examines the problems of decision-making in complex systems.

How should we make intelligent decisions in contexts in which the object of choice involves the actions of other agents whose choices jointly determine the outcome and where the outcome is unpredictable? Robert Axelrod and Michael Cohen address these issues in Harnessing Complexity: Organizational Implications of a Scientific Frontier. They define a complex adaptive system in something like these terms: a body of causal processes and agents whose interactions lead to outcomes that are unpredictable. So the interactions among agents often have unpredictable consequences; and the agents themselves adapt their behavior based on past experiences: “They interact in intricate ways that continually reshape their collective future.” Here is how Axelrod and Cohen put their question:

In a world where many players are all adapting to each other and where the emerging future is extremely hard to predict, what actions should you take? (xi)

This book is about designing organizations and strategies in complex settings, where the full consequences of actions may be hard — even impossible — to predict. (2)

Complexity and chaos are often used interchangeably, but Axelrod and Cohen distinguish sharply between them in these terms:

Chaos deals with situations such as turbulence that rapidly become highly disordered and unmanageable. On the other hand, complexity deals with systems composed of many interacting agents. While complex systems may be hard to predict, they may also have a good deal of structure and permit improvement by thoughtful intervention. (xv)

Here is a simple current example — an assembly of 1000 Egyptian citizens in January 2011, interested in figuring out what to do in light of their longstanding grievances and the example of Tunisia. Will the group erupt into defiant demonstration or dissolve into private strategies of self-preservation? The dynamics of the situation are fundamentally undetermined; the outcome depends on things like who speaks first, how later speakers are influenced by earlier speakers, whether the PA system is working adequately, which positions happen to have a critical mass of supporters, the degree to which the government can make credible threats of retaliation, the presence of experienced organizers, and a dozen other factors. So we cannot predict whether this group will move towards resistance or accommodation, even when we assume that all present have serious grievances against the Egyptian state.
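A toy model makes the tipping point concrete. The sketch below is my own illustration, not anything from the book or the review: a Granovetter-style threshold cascade in which each citizen joins the demonstration only once enough others already have, and two assemblies that differ by a single person’s caution end up in opposite places.

```python
# A minimal threshold-cascade sketch (illustrative only): each citizen
# demonstrates once the number already demonstrating meets their threshold.

def run_assembly(thresholds):
    """Return how many people end up demonstrating."""
    active = sum(1 for t in thresholds if t == 0)    # those willing to act alone
    while True:
        tipped = sum(1 for t in thresholds if t <= active)
        if tipped == active:                         # no one new is tipped over
            return active
        active = tipped

crowd_a = list(range(1000))                # thresholds 0, 1, 2, ..., 999
crowd_b = [0, 2] + list(range(2, 1000))    # identical, except one slightly more cautious person

print(run_assembly(crowd_a))   # 1000 -- a full cascade of defiance
print(run_assembly(crowd_b))   # 1    -- the cascade never gets started
```

The point is not the particular numbers but the structure: the outcome turns on fine-grained facts about who is willing to move first, which is exactly why it resists prediction.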

All of this has critical implications for planning: it is hard to plan well when the consequences of your plans are fundamentally unpredictable.

Decision theorists distinguish between situations of parametric rationality and strategic rationality. In the former the decision maker is playing against nature, with a fixed set of probabilities and causal properties; in the latter the decision maker is playing against and with other rational agents, and the outcome for each depends upon the choices made by all. Game theory offers a mathematical framework for analyzing strategic rationality, while expected utility theory is advanced as a framework for analyzing the problem of choice under risk and uncertainty. The fundamental finding of game theory is that there are equilibria for multi-person games, both zero-sum and non-zero-sum, for any game that can be formulated in the canonical game matrix of agents’ strategies and joint outcomes. Whether those equilibria are discoverable for ordinary strategic reasoners is a separate question, so the behavioral relevance of the availability of an equilibrium set of strategies is limited. And here is the key point: neither parametric rationality nor equilibrium-based strategic rationality helps much in the problem of decision-making within a complex adaptive system.
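To see what the canonical framework does deliver, here is a small sketch of brute-force search for pure-strategy equilibria once the game matrix can actually be written down. It is mine, using a stock prisoner’s-dilemma payoff matrix, not an example from the book.

```python
# Hedged illustration: find pure-strategy Nash equilibria of a two-player game
# by brute force. The payoffs are a standard prisoner's dilemma; complexity, on
# Axelrod and Cohen's account, is precisely where no such matrix is available.

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(r, c):
    """Neither player gains by unilaterally switching strategies."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# [('defect', 'defect')]
```

The exercise depends entirely on the payoffs and strategy sets being fixed and known in advance.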

The situation that Axelrod and Cohen describe here is an instance of strategic rationality, but it doesn’t yield to the framework of mathematical game theory, for two reasons. First, we can’t attach payoffs to combinations of strategies for the separate agents; this follows from the unpredictability assumption built into the idea of complexity. Second, complex adaptive systems are usually in a dynamic process of change, so the system never attains an equilibrium state.

The best-case scenario, the authors suggest, involves careful variations on existing strategies and experimentation. But is that good enough?
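One way to read “careful variation plus experimentation” in computational terms (my gloss, not the authors’ prescription) is as an exploration–exploitation rule: mostly reuse the strategy that has worked best so far, but occasionally try an alternative and keep score.

```python
# Rough epsilon-greedy gloss on "careful variation and experimentation"
# (illustrative; not Axelrod and Cohen's own model). The agent mostly exploits
# its best-performing strategy but experiments 10% of the time.
import random

random.seed(1)
true_success = {"A": 0.40, "B": 0.55, "C": 0.50}   # hidden from the agent
trials = {s: 0 for s in true_success}
wins = {s: 0 for s in true_success}

def choose(epsilon=0.1):
    if random.random() < epsilon or not any(trials.values()):
        return random.choice(list(true_success))                    # experiment
    return max(trials, key=lambda s: wins[s] / trials[s] if trials[s] else 0.0)

for _ in range(2000):
    s = choose()
    trials[s] += 1
    wins[s] += random.random() < true_success[s]

print(max(trials, key=trials.get))   # most-used strategy; with enough trials this tends to be "B"
```

Of course, the sketch assumes a stationary world the agent can sample repeatedly, which is just what a complex adaptive system may not provide; hence the worry about whether experimentation is good enough.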

Go, read.
