Don’t trust AI until we build systems that earn trust

Posted by economia, 27/02/2020. The Economist, Open Future

Progress in artificial intelligence masks a lack of the transparency that is vital for its adoption, says Gary Marcus, co-author of “Rebooting AI”

To judge from the hype, artificial intelligence is inches away from ripping through the economy and destroying everyone’s jobs—save for the AI scientists who build the technology and the baristas and yoga instructors who minister to them. But one critic of that view comes from within the tent of AI itself: Gary Marcus.

With an academic background in psychology and neuroscience rather than computer science, Mr Marcus has long been an AI gadfly. He relishes poking holes in the popular technique of deep learning for its inability to perform abstraction, even as it does an impressive job at pattern-matching. Yet his unease with the state of the art did not prevent him from advancing it with his own AI startup, Geometric Intelligence, which he sold to Uber in 2016.

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques, since they are so prone to failure and lack the transparency that researchers need to understand how algorithms reach their conclusions. In classical statistics the parameters of a model are determined by people; with AI, the system itself decides them. Though the techniques work, say, in identifying that a cell biopsy is cancerous, it is unclear why they work. This makes it tricky to deploy AI in areas like medicine, aviation, civil engineering, the judiciary and so on.

This is a point Mr Marcus makes with verve and pith in his latest book, “Rebooting AI” (Pantheon, 2019), which he co-wrote with Ernest Davis. As part of The Economist’s Open Future initiative, we asked Mr Marcus why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future. The brief interview appears below.

An interview with Gary Marcus

The Economist: Your call for trustworthy AI would seem to entail new rules and institutions, akin to the IATA and the FAA for aviation, or the ITU and FCC for telecoms—abbreviations that basically mean big, institutional bureaucracies on a national and international scale. Is that what you want?

We definitely need something. Right now, for example, there are few regulations on what a driverless-car company might release; it could be sued after the fact, but it can put essentially anything it likes on the road. Drugs are much more regulated, with lengthy trial processes and so forth. Driverless cars could eventually save many lives but, as Missy Cummings of Duke University has pointed out, until the technology is mature we need to proceed with the kind of caution we would accord a new drug.

In the long term, we're going to have to mandate some set of innate values as well. A domestic robot à la Rosie the Robot, for example, will need to recognise the value of human life before it leaves the factory. We can't just leave values to chance, depending on what some system happens to encounter in the world, and what its so-called “training database” is.

The Economist: Why is it so hard to build causality and counterfactuals into our AI systems?

Current systems are too superficial. The leading mechanisms (eg, deep learning) discern correlation after correlation from giant data sets, but at a very shallow level. A deep-learning system might learn to recognise an elephant by noticing the textures of its wrinkly skin, but miss the significance of the trunk. We haven't yet figured out how to build systems that can learn from large-scale databases (as deep learning does) and also represent the rich, articulated knowledge that earlier “classical AI” approaches sought to capture.

You can't reason about what an elephant is likely to do if you don't understand what an animal is or what a trunk is. Categorising an image is just not the same thing as reasoning about what would happen if you let an angry elephant loose in Times Square. There's a versatility to human thought that just can't be captured by correlation alone.

The Economist: Should the fact that the most sophisticated and best-performing AI can't explain how it arrived at its conclusions make us reject it for critical uses (in medicine, power grids, etc)? Or should we tolerate this so long as we can validate that it works well?

We shouldn't just be sceptical of current AI because it can't explain its answers, but because it simply isn't trustworthy enough.

And validation is actually the key to that. We don't yet have sound tools for validating machine learning; instead we have something closer to a seat-of-the-pants method: we try a system in a bunch of circumstances and, if it works there, we hope we are good to go. But we often have little guarantee that what works in ordinary circumstances will also work in extraordinary ones. An autonomous vehicle may work on highways but fail to make good decisions on a crowded city street during a snowstorm; a medical-diagnosis system may work well for common diseases but frequently miss rare ones.

To some extent, the search for "explainable AI" is a bandage on that problem: if you can't make your software perfect, you'd at least like to know why it makes the mistakes that it does in order to debug it. Unless and until we get smarter about building complex cognitive software that can cope with the complexity and variability of the real world, demanding explainability in order to facilitate debugging may be the best we can do.
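
As a concrete illustration of the validation gap Mr Marcus describes (and of the shallow, shortcut-driven learning he mentions in the previous answer), here is a minimal, hypothetical sketch, not taken from the book: a simple classifier that scores well on held-out data from the “ordinary” circumstances it was trained in, yet fails once a shortcut correlation in the training data stops holding. The data, the libraries used (numpy, scikit-learn) and the names make_data and shortcut_agreement are illustrative assumptions, not anything from the interview.

```python
# A minimal, hypothetical sketch of the validation gap described above:
# a classifier that looks reliable on held-out "ordinary" data can fail
# badly once a shortcut correlation in the training data no longer holds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shortcut_agreement):
    """Labels depend (noisily) on a 'core' feature; a second 'shortcut'
    feature matches the label with probability `shortcut_agreement`."""
    y = rng.integers(0, 2, n)
    core = y + rng.normal(0.0, 1.5, n)                   # genuinely informative, but noisy
    agrees = rng.random(n) < shortcut_agreement
    shortcut = np.where(agrees, y, 1 - y).astype(float)  # easy-to-learn proxy for the label
    return np.column_stack([core, shortcut]), y

# "Ordinary circumstances": the shortcut tracks the label 95% of the time.
X_train, y_train = make_data(5000, 0.95)
X_val, y_val = make_data(1000, 0.95)

# "Extraordinary circumstances": the shortcut becomes pure noise.
X_shift, y_shift = make_data(1000, 0.50)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, ordinary circumstances:     ",
      accuracy_score(y_val, model.predict(X_val)))
print("accuracy, extraordinary circumstances:",
      accuracy_score(y_shift, model.predict(X_shift)))
```

Run as written, the first accuracy figure is high and the second falls sharply, even though nothing in the seat-of-the-pants validation step warned that it would.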

The Economist: Jobapocalypse, yes or no—and if so, when? How ought the state and business respond?

Next decade, not so much; next century: huge impact.

The thing to remember about the next decade is that current AI can't even read; it also can't reason. Four years ago, machine-learning pioneer Geoffrey Hinton said that radiologists would lose their jobs in five or ten years; radiologists got really scared and a lot of people stopped studying radiology. So far? Not one radiologist has actually been replaced. Radiology isn't just about looking at images (which deep learning is good at); it's also about reading (patient histories) and reasoning (about how to put words and images together), and machines can't yet do that reliably. What we have now are new tools that can help radiologists, not radiologists-in-a-box.

That said, more powerful forms of AI will eventually emerge, and at that point I think it is inevitable that we will have to move towards something like a universal basic income, and a new way of life in which most people find fulfilment through creative endeavours rather than employment.

The Economist: Is there anything that you think AI will never be able to do, that humans will always be able to do well—and thus give humans an edge in society?

I wouldn't count on it. Paraphrasing the late musician Prince, never is a mighty long time. Human brains are the most flexible thinking machines currently found on the planet, but (as I explained in an earlier book, “Kluge”) they are a long way from perfect; no physical law prevents us from building machines with minds as powerful as the human mind, nor from building machines with minds more powerful and more reliable than anything biology has thus far developed.

On the other hand, I don't foresee machines trying to get an “edge” over us; they do what they do because they follow the programs that guide them. Unless something fundamental changes, we will still be in charge, even if our machines exceed us in pure cognitive capacity. Calculators are better at arithmetic than people are, but have thus far shown no interest whatsoever in taking over society.

The Economist: What would you tell a 15-year-old to study, to be relevant in the workplace of the future?

Learn how to learn. Creativity, on-the-fly learning and critical-thinking skills are going to be what matters. Your grandchildren may live in a world without work, but you won't; instead you will live in a world that is rapidly changing. Whatever you choose to do, learning to code will be a valuable skill, even if you don't do it for a living, because knowing the basic logic of machines will be critical to thriving in our society as it adapts to the ever-growing powers of machines.

Find more about Open Future here