Can Democracy and Free Markets Survive in the Coming Age of AI? – Wall Street Journal

Given all the data that can be gathered by smartphones and sensors, with more to come, The Economist asks in a recent issue whether artificial-intelligence systems could one day replace the autonomous choices on which the market is based. And if technology can outperform the invisible hand in the economy, might it be able to do the same at the ballot box when it comes to politics?

These questions were raised and tested throughout the 20th century as various governments attempted to redesign their economies and societies in accord with what were believed to be scientific laws. Most such schemes, especially those carried out by authoritarian states, ended up as complete failures. Some went tragically awry, including Stalin's collectivization of agriculture in the Soviet Union and Mao's Great Leap Forward in China.

Could things be different in the age of AI? Given the proliferation of mobile and Internet of Things devices, the next few decades promise to make information as ubiquitous as electricity. The amount and variety of data gathered around the world will continue to grow by leaps and bounds, as will the power and sophistication of the computers and algorithms used to analyze it all.

Last century's rivalry with the Soviet Union and its communist ideology has been replaced by a rivalry with China and its AI-based central planning. How is such AI-based planning likely to work out? The Economist essay references the work of George Washington University professor Henry Farrell, who explored this question in a recent article on the Crooked Timber blog.

"The collective wisdom emerging in Washington and other capitals is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism," writes Mr. Farrell.

Central planning based on machine learning must overcome two serious challenges, he says.

First, while machine learning can be applied to just about any domain of knowledge, its methods are most applicable to problems significantly narrower and more specialized than those humans are capable of handling, and there are many tasks for which machine learning is not effective. In particular, as we're frequently reminded, correlation does not imply causation.

Machine learning is a statistical modeling technique, like data mining and business analytics. It finds and correlates patterns between inputs and outputs without necessarily capturing their cause-and-effect relationships. It excels at solving problems in which a wide range of potential inputs must be mapped onto a limited number of outputs; large data sets are available for training the algorithms; and the problems to be solved closely resemble those represented in the training data, e.g., image and speech recognition, language translation. But deviations from these assumptions can lead to poor results. This is clearly the case when attempting to apply machine learning to highly complex and open-ended problems like markets and human behavior.
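To make the preceding point concrete, here is a minimal Python sketch of a model that learns a correlation rather than the underlying mechanism. The quadratic relationship, the numeric ranges and the noise level are all invented for illustration; nothing here comes from the article or from Mr. Farrell. The fitted model performs well on data resembling its training set and fails badly once the inputs drift outside that regime:

    import numpy as np

    rng = np.random.default_rng(0)

    # The true relationship is nonlinear, but over the narrow training
    # regime a straight line approximates it well: the model captures a
    # local correlation, not the underlying cause-and-effect mechanism.
    def truth(x):
        return x ** 2

    x_train = rng.uniform(0.0, 1.0, 500)
    y_train = truth(x_train) + rng.normal(0.0, 0.02, 500)

    slope, intercept = np.polyfit(x_train, y_train, 1)  # fit a line

    def predict(x):
        return slope * x + intercept

    x_in = rng.uniform(0.0, 1.0, 500)    # resembles the training data
    x_out = rng.uniform(3.0, 4.0, 500)   # deviates from it

    err_in = np.mean(np.abs(predict(x_in) - truth(x_in)))
    err_out = np.mean(np.abs(predict(x_out) - truth(x_out)))
    print(f"mean error in-distribution:     {err_in:.3f}")
    print(f"mean error out-of-distribution: {err_out:.3f}")

On the training regime the line looks accurate; a few units away, its predictions are off by an order of magnitude or more. Markets and human behavior shift constantly, which is exactly the condition under which this kind of model breaks down.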

The second major challenge is that machine learning can serve as a magnifier for existing errors and biases in the data. "Garbage in, garbage out" applies as much to AI today as it has to computing since its early years. Because AI algorithms are trained on the vast amounts of data collected over the years, if that data includes past racial, gender or other biases, their predictions will reflect those biases.

"When this data is then used to make decisions that may plausibly reinforce those processes (by singling e.g. particular groups that are regarded as problematic out for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself," Mr. Farrell writes.
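A toy simulation can make this feedback loop concrete. The model below is not Mr. Farrell's; the groups, rates and allocation rule are invented assumptions chosen only to show the mechanism. Two groups behave identically, but the historical record over-counts one of them, and each round's attention is allocated according to that record:

    import numpy as np

    rng = np.random.default_rng(1)

    TRUE_RATE = 0.05                      # both groups behave identically
    recorded = np.array([100.0, 150.0])   # record already over-counts group 1
    ROUNDS, PATROLS = 20, 1000

    for _ in range(ROUNDS):
        # "Learning" step: send most of the attention wherever the record
        # says the problem is, a crude stand-in for a predictive model
        # trained on its own past outputs.
        hot = int(np.argmax(recorded))
        share = np.where(np.arange(2) == hot, 0.8, 0.2)
        patrols = (share * PATROLS).astype(int)
        # Only what is observed gets recorded, so attention creates data.
        recorded += rng.binomial(patrols, TRUE_RATE)

    print("group 1's share of the record:", round(recorded[1] / recorded.sum(), 3))

Group 1 starts with 60% of the record and, despite identical true rates, ends up with roughly three-quarters of it: each round's skewed data justifies the next round's skewed attention. The point of the sketch is Mr. Farrell's: without an outside check on the data, the bias feeds on itself.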

In more open, free-market democratic societies there will always be ways for people to point out and counteract these biases, he says, but in more centrally managed, autocratic societies those corrective tendencies will be weaker.

"In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and reducing further the possibility of negative feedback that could help correct against errors," Mr. Farrell concludes.

Irving Wladawsky-Berger worked at IBM from 1970 to 2007, and has been a strategic adviser to Citigroup, HBO and Mastercard and a visiting professor at Imperial College. He's been affiliated with MIT since 2005, and is a regular contributor to CIO Journal.

