Richard Edelman at the firm’s downtown Seattle office on Feb. 28, 2024. (GeekWire Photo / Todd Bishop)

A new report points to a crisis of trust in innovation, and the risk that rapid technological change — especially in the field of artificial intelligence — will fuel increased populism and polarization across societies.

Richard Edelman, CEO of Edelman, discussed these and other findings from the 2024 Edelman Trust Barometer during visits last week with the global communications firm’s clients in tech-heavy Seattle and San Francisco.

His message, as he explained in a blog post this week, was that “acceptance of innovation cannot be taken for granted, that we must spend much more of our time on adaptation and education, not just on R&D.”

  • More than 75% of those surveyed for the report expressed trust in the tech industry, while trust in artificial intelligence was 25 points lower, at 50%.
  • Eight years ago, technology was the most trusted industry in 90% of the countries the firm studies. Now it leads in trust in half of those countries.
  • Trust in AI companies has declined from 61% to 53% in the past five years.

Edelman says this trend illustrates the risks to the technology industry from what he calls its “headlong jump into artificial intelligence.”

GeekWire sat down with Edelman during his visit to the firm’s downtown Seattle office for this episode of the GeekWire Podcast. It was a fitting location, given that the firm was inspired to start the Trust Barometer by the mass protests and riots at the World Trade Organization conference in Seattle nearly 25 years ago.

Listen below, or subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen. Continue reading for edited highlights from Edelman’s comments.

The importance of trust: Trust is the central proposition in a well-functioning economy and society. And it’s four parts: ability, dependability, integrity and purpose. And we find that, after 2008 and the Great Recession, the ability part is almost taken for granted. It’s, “Can you do it all the time? Do you have standards, and do you have a moral compass?” It just isn’t enough to be able to just do. It’s, do you do it well, and do you do it consistently?

Public sentiment about AI: People are looking at AI and saying, “We have to do this right.” And when we talk about the idea of suspicion of innovation, innovation should be the greatest thing ever for business. And this should be a golden time for business, because it’s the most trusted.

But if we rush this, if we put it out in a way where government isn’t seen as being able to regulate because it can’t keep up, or if it’s seen as being done without a context where there’s reskilling or upskilling to take care of people whose jobs are going to end, then we’re going to have a populist reaction.

AI and the 2024 elections: We need to show that, for instance, we’re going to make absolutely certain that the election goes well on information quality. There are 50 elections this year around the world, and AI is going to have a major influence on this. And all the tech companies I know are very concerned about their technology being used well.

What tech companies can do: Implementation, adaptation, and acceptance are just as important as invention. And we are not talking enough about how we’re going to do this well, and the idea that it will only come in its time, when it’s ready. We need to be really clear with people about how the experiments are being conducted.

This is a massive opportunity for business to show that it’s earned its position as the most trusted institution. It should do this revolution with government setting the boundaries, with NGOs on training in the last mile, with media to explain what they’re doing and show that, in fact, this can be done well.
