A Psychologist Explains How AI and Algorithms Are Changing Our Lives
Behavioral scientist Gerd Gigerenzer has spent decades studying how people make choices. Here’s why he thinks too many of us are now letting AI make the decisions.
In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.
These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty.
In his latest book, “How to Stay Smart in a Smart World,” Dr. Gigerenzer looks at how algorithms are shaping our future—and why it is important to remember they aren’t human. He spoke with the Journal for The Future of Everything podcast.
The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?
It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict [the spread of] a virus, like the coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well. I call this the “stable world principle,” and it helps you as a first clue about what AI can do. It also tells you that, in order to get the most out of AI, we have to make the world more predictable.
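The idea of identifying one or two important cues and ignoring the rest can be made concrete in code. Below is a minimal sketch, my illustration rather than anything from the interview, of a single-cue decision rule in the spirit of the fast-and-frugal heuristics Dr. Gigerenzer’s research describes; all names and data are hypothetical.

```python
# A minimal sketch of a one-cue heuristic: consult cues in order of
# assumed validity, decide on the first one that discriminates, and
# ignore everything else. All names and data here are hypothetical.

def one_cue_choice(option_a: dict, option_b: dict, cues: list) -> str:
    """Pick between two options using cues ranked from most to least valid."""
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                     # the first discriminating cue decides
            return "A" if a > b else "B"
    return "tie"                       # no cue discriminates; guess

# Which of two made-up cities is larger?
city_a = {"has_major_airport": 1, "is_capital": 0}
city_b = {"has_major_airport": 0, "is_capital": 1}
print(one_cue_choice(city_a, city_b, cues=["has_major_airport", "is_capital"]))
# -> "A": the airport cue discriminates, so the capital cue is never consulted
```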
So after all these decades of computer science, are algorithms really still just calculators at the end of the day, running more and more complex equations?
What else would they be? A deep neural network has many, many layers, but they are still calculating machines. They can do much more than ever before with the help of video technology. They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do.
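The point that deep networks are “still calculating machines” is easy to see in code. Here is a minimal sketch, assuming nothing beyond NumPy, of the arithmetic at the core of a network’s forward pass: each layer multiplies, adds and applies a simple threshold. The layer sizes and weights below are arbitrary placeholders.

```python
# A deep network's forward pass is layered arithmetic: multiply, add,
# clip. The weights below are random placeholders, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """Pass x through a stack of (weights, bias) pairs with a ReLU nonlinearity."""
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)   # pure calculation, no understanding
    return x

sizes = [4, 8, 8, 2]                      # an arbitrary toy architecture
layers = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes, sizes[1:])]
print(forward(rng.normal(size=4), layers))
```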
Does being able to understand how these algorithms are making decisions help people?
Transparency is immensely important, and I believe it should be a human right. If an algorithm is transparent, you can actually modify it and start thinking [for] yourself again rather than relying on an algorithm that isn’t better than a bunch of badly paid workers. So we need to understand the situations where human judgment is needed and is actually better. We also need to pay attention that we aren’t running into a situation where tech companies sell black-box algorithms that determine parts of our lives. It affects everything, including your social and your political behavior, and then people lose control to governments and to tech companies.
You write that “digital technology can easily tilt the scales toward autocratic systems.” Why do you say that? And how is this different from past information technologies?
This kind of danger is a real one. Among all the benefits digital technology has, one of its vices is the propensity for surveillance by governments and tech companies. But people don’t read privacy policies anymore, so they don’t know. The privacy policies are also set up in a way that you can’t really read them; they are too long and complicated. We need to get control back.
So then how should we be smart about something like this?
Think about a coffee house in your hometown that serves free coffee. Everyone goes there because it is free, and all the other coffee houses go bankrupt. So you have no choice anymore, but at least you get your free coffee and enjoy your conversations with your friends. But on the tables are microphones and on the walls are video cameras that record everything you say, every word, and to whom, and send it off to be analyzed. The coffee house is full of salespeople who interrupt you all the time to offer you personalized products. That is roughly the situation you are in when you are on Facebook, Instagram or other platforms. [Meta Platforms Inc., the parent company of Facebook and Instagram, declined to comment.] In this coffee house, you aren’t the customer. You are the product. So we want to have a coffee house where we are allowed again to pay [for] ourselves, so that we are the customers.
We’ve seen personalized ads baked into the infrastructure of the internet, and it seems like it would take some pretty serious interventions to make that go away. If you’re being realistic, where do you think we’re headed in the next decade or so with technology, artificial intelligence and surveillance?
In general, I have more hope that people will realize that it isn’t a good idea to give your data, and your responsibility for your own decisions, to tech companies that use it to make money from advertisers. That can’t be our future. We pay everywhere else with our [own] money, and that is why we are the customers and have the control. There is a true danger that more and more people are sleepwalking into surveillance and just accepting everything that is more convenient.
But it sounds so hard, when everything is so convenient, to read privacy policies and do research on these algorithms that are affecting my life. How do I push back against that?
The most convenient thing is not to think, and the alternative is to start thinking. The most important [mechanism to be aware of] is one that psychologists call “intermittent reinforcement.” You get a reinforcement, such as a “Like,” but you never know when you will get it. People keep going back to the platform and checking on their Likes. That has really changed the mentality and made people dependent. I think it is very important for everyone to understand these mechanisms and how one gets dependent, so you can get the control back if you want.
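To make that mechanism concrete, here is a minimal simulation, my illustration rather than the interviewee’s, of intermittent reinforcement: the reward arrives unpredictably, so no single check tells you anything about the next one, which is what keeps people checking. The 30% rate is an arbitrary placeholder.

```python
# A toy simulation of intermittent reinforcement: each check of the
# platform pays off unpredictably. The 30% rate is an arbitrary placeholder.
import random

random.seed(1)

def check_feed(p_like: float = 0.3) -> bool:
    """One visit to the platform; a 'Like' appears with probability p_like."""
    return random.random() < p_like

streak_without_reward = 0
for visit in range(1, 11):
    if check_feed():
        print(f"visit {visit}: a Like! (after {streak_without_reward} empty checks)")
        streak_without_reward = 0
    else:
        streak_without_reward += 1
```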
This interview has been condensed and edited.
Write to Danny Lewis at daniel.lewis@wsj.com