A cry for freedom in the algorithmic age

Humanity must retain the right to reject the network and uphold individuality, says Gaspard Koenig

 

As artificial intelligence makes its way into all areas of life, the most prominent people aiming to explain the technology and translate it for the public tend to be scientists, businessmen and, often, Americans. Gaspard Koenig is different. A French philosopher, he runs GenerationLibre, a think-tank that promotes classical liberal values of individual freedom. And he brings vibrant intellectual energy to the debate.

In his latest book, “The End of the Individual: A Philosopher’s Journey to the Land of Artificial Intelligence” (Éditions de l’Observatoire, 2019), Mr Koenig argues that society should be cautious about the power of AI not because it will destroy humanity (as some argue) but because it will erode our capacity for critical judgement. He already sees that happening, as people blindly follow algorithmic recommendations, be it to watch a film or use a map. He frets this will only get worse.

What can we do? Mr Koenig defies the mantra of Silicon Valley and believes we should not be afraid to unplug from the network or to stray from the aggregated data that funnel us into a new form of utilitarianism, which presumes that what is best for the majority of consumers is right for us as individuals.

 

The Economist: Will AI erode human freedom or enhance it?

Gaspard Koenig: As a technology, AI undoubtedly represents an advance, one that has been in the making for the past 70 years and can now provide tools for personal emancipation, broadening our horizons. Far from replacing human intelligence, which consists of biological mechanisms deeply ingrained in our flesh and blood, it merely automatises the way our own intellectual outputs are processed.

But as AI is deployed commercially today, with deep-learning systems fed by personal data and nudging human behaviours, I find it deeply infantilising. We increasingly feel like pawns governed, willy-nilly, by algorithms using parameters we cannot understand (nor modify, obviously) and issuing recommendations “for our own good”. Peter Thiel goes so far as to say that “AI is communist”. I would argue that it takes the milder form of Tocqueville’s “democratic despotism,” where we have become the despots of ourselves and the slaves of efficiency. This is a deliberate commercial choice, not something pre-destined. Innovations based on the blockchain, among others, could rebalance the technology towards the individual.

 

The Economist: You argue that AI undermines free will. But three centuries ago it was said that science would destroy religion—and that didn’t happen. Perhaps free will will be just fine in the age of AI?

Mr Koenig: Technological revolutions always have deep cultural implications. Take the printing press. It greatly affected religious dogma by paving the way for the Reformation (and arguably the Enlightenment). But it also prompted a call for regulation, such as Beaumarchais in 18th-century France campaigning for authorship rights. I fear that today, the effects of AI on society are underestimated. Philosophers and computer scientists need to work together much more.

The rejection of free will is part of an academic consensus forged by experimental psychology, behavioural economics and neurosciences. If we want to get a handle on AI, we have to address this. What really matters is not whether our minds are “pre-determined” but how we maintain the capacity for internal deliberation. Only then will we be able to draw regulatory conclusions. As with printing, I believe the key element is the extension of the domain of property rights.

 

The Economist: AI seems to strengthen the dominance of the nation-state relative to the individual. What can be done to narrow this asymmetry of power?

Mr Koenig: AI strengthens the dominance of whoever controls the data. In the West, giant platforms are making the nation-state nearly obsolete. They are becoming the ultimate vehicle of social norms. Isn’t it striking that rules concerning freedom of speech are practically entrusted to social networks, with the consent of powerless governments? Algorithms are eroding the principle of collective deliberation.

In China, however, the centralisation of data into public hands reinforces the power of the Communist Party. I was struck to see how explicitly the Chinese tech giants known as the BATX—Baidu, Alibaba, Tencent and Xiaomi—are working for the government, implementing social-control policies and happily sharing data with the authorities.

As a liberal, I am not satisfied with either model. I want to find ways to redistribute power. And it starts with the re-appropriation of our personal data.

 

The Economist: If we need a new liberalism for an age of AI, what principles would that entail?

Mr Koenig: Silicon Valley is obsessed with utilitarianism. The governing principle of most apps is to maximise the happiness of the maximum number of users. That’s why the concept of “community” is so dear to them: what counts is not to offer the best product to a given client as in the industrial age, but to nudge users in a way that satisfies most of them.

True liberals in the humanist tradition should understand the threats posed to liberty by this paradigm. If Facebook implemented its Libra currency, it would become the most powerful entity that has ever existed—and should be fought as such. The fascination with buzzwords such as “start-up” or “disruption” seems to anaesthetise our critical thinking. Moreover, the blatant faults of today’s governments lead us to resist public policy in general, thus forgetting that Adam Smith was the founder of political economy. Some 80 years after the Walter Lippmann Conference of 1938, which sought to reorient liberalism amid the Depression, it is time to reinvent liberalism again and to recalibrate the state to enable individual autonomy and discourage oligopolies. That should entail, among other things, consideration of a universal basic income.

 

The Economist: You believe we need to establish a property right on personal data. Won’t that just fuel a hyper-financialisation in every corner of human activity?

Mr Koenig: Property rights classically entail three elements: usus, fructus, abusus—that is, the rights to use, profit from and dispose of property. The principle of fructus would allow us to be compensated for the value of our data, thus forcing Facebook and others to pay us for the raw material we provide. This is not so much “financialisation” as it is a fair rebalancing of the economic value chain. We would move from today’s digital feudalism, where the lord gives us free services in exchange for all the data that is harvested, to proper capitalism based on contractual terms. As always, property rights protect the individual against the abuse of central power.

But then there are also usus and abusus. Property rights allow individuals to ignore the market. Nobody forces you to sell your house, even if you underexploit it. The same applies to data: through a personal data wallet, we would decide which data we are willing to share, with whom, to what end and under what conditions. Platforms would have to accept our terms and conditions, not the other way around.

 