An interview with Brett Frischmann, co-author of “Re-Engineering Humanity”


“We become what we behold. We shape our tools and then our tools shape us.” This truism, coined by the media scholar John Culkin in describing the work of Marshall McLuhan, is more potent than ever in the age of data and algorithms. Technology is having a profound effect on how people live and think.

Some of those changes are documented in “Re-Engineering Humanity” by two technology thinkers from different academic backgrounds: Brett Frischmann is a law professor at Villanova University in Pennsylvania and Evan Selinger teaches philosophy at Rochester Institute of Technology in New York.

Together, they explore how ordinary activities like clicking to accept an app’s legal terms are made so simple that they “train” us not to read the contents. The authors fear that, over time, humans will lose their capacity for judgment, discrimination and self-sufficiency. Or, as Douglas Rushkoff, a tech writer, put it: “We should be less scared of robots than of becoming more robotic ourselves.”

The Economist’s Open Future initiative asked Mr Frischmann five questions about these dyspeptic themes.

The Economist: How is technology “re-engineering” humanity?

Brett Frischmann: Human civilisations have re-engineered humanity for millennia. “Humanity” is who we are, and are capable of being, within our built world. It’s reflected in the world we’re building for ourselves, our children, and future generations. Technology re-engineers humanity in part by affecting human capabilities and in part by shaping and constituting our values, beliefs, and shared commitments.

Our book is about how digital networked technologies coupled with sophisticated social engineering are re-engineering our world and humanity. Like the proverbial frogs in slowly warming water, we’re gradually being led to accept a world governed by supposedly smart tech. For the sake of convenience and cheap bliss, we surrender ourselves, follow scripts, and risk becoming indistinguishable from simple machines.

The Economist: Is it possible to live free of the ubiquitous digital technologies and algorithms that track and influence us? If not, can we really be free?

Mr Frischmann: Nothing is inevitable besides entropy. It’s possible to live free and in diverse ways. It’s increasingly difficult, however, to leave digital technology aside for significant portions of one’s life; it may require sacrifices that are unbearable for many. The technological, social, economic, educational, political and cultural systems that many people rely on are interconnected and heavily reliant on digital tech. We need systemic change so that we can live free.

We can, however, find times and spaces within our lives to be free. A first step toward such freedom is to begin looking for opportunities. Then decide for yourself. Just as we teach children to resist peer pressure, we must learn to resist techno-social pressures.

The Economist: One concern is electronic contracts, which you argue shape human behaviour in troubling ways and should be reformed. Explain the problem and your solution.

Mr Frischmann: In theory, contract law enables and ought to enable people, first, to exercise their will freely in pursuit of their own ends and, second, to relate to others freely in pursuit of cooperative ends. In practice, electronic contracting threatens autonomy and undermines the development of meaningful relationships built on trust. Optimised to minimise transaction costs, maximise efficiency, minimise deliberation, and engineer complacency, the electronic contracting architecture nudges people to click a button and behave like simple stimulus-response machines.

To recover contract law’s core social functions, we advocate ruling out automatic contracts in favour of contracts based on some degree of deliberation and meaningful relationships; we also advocate cutting off hidden side-agreements in multi-sided markets, where the consumer is reduced to a resource to be mined, bought and sold.

The Economist: You advocate building in transactions costs, obfuscation and “seams” within digital systems. How would that help people?

Mr Frischmann: Seamless and friction-free are great optimisation criteria for machines, not for humans. After all, machines are tools that serve human ends. Machines don’t set their objectives. Humans do, or so we hope. To author our lives, and not just perform scripts written by others, we need to sustain our freedom to be off: to be free from powerful techno-social engineering scripts. Our proposals help protect that freedom and provide the space and opportunities for people to develop capabilities essential to human flourishing.

Flourishing humans need some friction. Friction is resistance. It slows things down. We need opportunities to stop and think, to deliberate and even second-guess ourselves and others. This is how we develop the capacity for self-reflection; how we experiment, learn and develop our own beliefs, tastes, and preferences; how we exercise self-determination. This is free will in action. We’re social beings; meaningful relationships require friction too. It’s how we get to know each other and build trust. Seams are also critical for trusted governance, which is sorely lacking in our digital networked environment.

The Economist: The book turns the Turing Test on its head. Explain your version and why it’s needed.

Mr Frischmann: Alan Turing proposed a test to examine whether a machine can think. He scrutinised the line between humans and machines, focusing on the machine side of the line. We examine the human side, use machines as a baseline, and ask when and how humans behave in a machine-like manner. We begin with different intelligence tests, but extend our analysis to different capacities, such as how we relate to others, as well as the core concepts of free will and autonomy. Our tests are plausible empirical tests and, more importantly, conceptual tools to examine what makes us human and how our humanity is reflected in and affected by the technologies we develop and use.

Many claim this or that tech is dehumanising; such claims are untestable without a baseline. Turing inspired us to use machines as a baseline. Turing tested the “humanisation” of machines along a specific dimension: conversational intelligence via text messaging. In a sense, we test the “machinisation” of humans along various dimensions. Like Turing’s, our tests follow a two-step procedure. First, run an experiment, whether empirical or thought, to determine if, in some context, humans are behaving like simple machines. If so, pause and look closer at what techno-social engineering is doing to us. The first step is observational; the second step is evaluative.
