OpenAI CEO Sam Altman testifies before the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law on Tuesday, May 16, 2023. (Image via webcast.)

OpenAI’s partnership with Microsoft, and the implications of the tech giant’s stake in the artificial intelligence powerhouse, were raised as an issue Tuesday during a U.S. Senate Judiciary subcommittee hearing on AI that featured testimony from OpenAI CEO Sam Altman and other industry players.

Altman, in marked contrast to tech leaders of the past, actively encouraged legislative and regulatory oversight of the industry.

What’s more, he said the oversight should focus heavily on the small number of players, including OpenAI and Microsoft, with the capability to train and deploy the large language models that power generative AI tools.

“It is absolutely true that the number of companies that can train the true frontier models is going to be small, just because of the resources required,” Altman said toward the end of the three-hour hearing, responding to Sen. Cory Booker (D., N.J.). “And so I think there needs to be incredible scrutiny on us and our competitors.”

Altman added, “I think there is a rich and exciting industry happening of incredibly good research and new startups that are not just using our models, but creating their own. And I think it’s important to make sure that whatever regulatory stuff happens, whatever new agencies may or may not happen, we preserve that fire, because that’s critical.”

In his opening statement, Altman pointed out that OpenAI is governed by a nonprofit, with a mission and a charter “that commit us to working to ensure the broad distribution of the benefits of AI and to maximizing the safety of AI systems.”

However, Gary Marcus, a professor emeritus at New York University who focuses on AI issues, called out Microsoft’s investment in OpenAI as an issue with the potential to undermine OpenAI’s stated altruistic mission.

“Seven years later, they’re largely beholden to Microsoft, embroiled in an epic battle of search engines that routinely make things up. And that’s forced [Google parent] Alphabet to rush out products and deemphasize safety,” Marcus said.

He added, “Humanity has taken a backseat. AI is moving incredibly fast with lots of potential but also lots of risks.”

Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman at the Microsoft campus in Redmond, Wash. on July 15, 2019. (Photography by Scott Eklund/Red Box Pictures)

Separately, in a pre-taped interview with CNBC’s Andrew Ross Sorkin, Microsoft CEO Satya Nadella disputed Elon Musk’s past assertion that Microsoft controls OpenAI.

“Look, while I have a lot of respect for Elon and all that he does, I’d just say that’s factually not correct,” Nadella said, according to an early excerpt from the interview. “OpenAI is very grounded in their mission of being controlled by a nonprofit board. We have a noncontrolling interest in it, we have a great commercial partnership in it.”

OpenAI was founded as a nonprofit in late 2015 by Musk, Altman, Ilya Sutskever, OpenAI’s chief scientist, and Greg Brockman, former Stripe CTO. Musk, the SpaceX founder, Tesla CEO, and now Twitter owner, is no longer involved.

In 2019, OpenAI launched a related “capped profit” company with Altman as CEO.

Answering questions during the hearing, Altman acknowledged the risks that AI poses to jobs and society, and agreed that powerful new AI engines should undergo the same level of regulatory scrutiny as nuclear reactors do.

But he also offered a broader long-term perspective.

“I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better,” he said, adding, “I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused. And it’s a tool that people have a great deal of control over and how they use it.”

Altman said early users seem to generally understand the limitations and flaws inherent to AI in its current form. However, he also expressed concern about what will happen when AI becomes more accurate and reliable, because users may then stop being skeptical of its output and stop taking steps to verify what the technology tells them.

Altman’s remarks seemed to be generally well-received by members of the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law. His comments reflected his ethos as a startup founder who led the Y Combinator startup accelerator in Silicon Valley prior to taking the reins as OpenAI’s CEO.

Christina Montgomery, IBM’s chief privacy and trust officer, told the subcommittee that the current public focus on generative AI makes this “precisely the time to define and build the right guardrails to protect people and their interests.”

At the same time, she said, “AI is just a tool, and tools can serve different purposes.”

“To that end, IBM urges Congress to adopt a precision regulation approach to AI,” she added. “This means establishing rules to govern the deployment of AI and specific use cases, not regulating the technology itself.”

Sen. Richard Blumenthal (D., Conn.), who chairs the subcommittee, opened the hearing by playing a voice clone of himself reading a script that ChatGPT generated when Blumenthal asked the AI engine what he might say to open such a hearing. Here’s what it said.

“Too often, we have seen what happens when technology outpaces regulation, the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination, and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.”

Blumenthal called the script and the reasoning behind it “pretty impressive.”

In the same spirit, here’s how ChatGPT summarized Altman’s opening statement:

Sam Altman’s statement highlights several key points:

  1. Potential of AI: Altman believes in the potential of AI to improve nearly every aspect of our lives, from productivity to creative work, to addressing significant challenges like climate change and cancer. He draws an analogy to the transformative impact of the printing press.
  2. OpenAI’s Mission and Structure: OpenAI is governed by a nonprofit, with its activities driven by a mission and charter committed to ensuring the broad distribution of AI benefits and maximizing safety. Their aim is to build tools that can contribute significantly to humanity.
  3. Safety and Responsibility: Before releasing any AI system, OpenAI conducts extensive testing, engages external experts for audits, improves model behavior, and implements robust safety measures. Altman noted that for their latest model, GPT-4, they spent over six months conducting evaluations and testing.
  4. Regulation and Cooperation with Government: Altman suggests that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful AI models. He proposes a combination of licensing and testing requirements for AI models above a certain capability threshold. Altman expresses a willingness to cooperate with governments to develop and update safety measures and explore opportunities for global coordination.
  5. Anxiety about AI: Altman acknowledges the anxiety that many people feel about how AI might change our lives. He believes it is essential to work together to identify and manage potential downsides so that we can all enjoy the benefits.
  6. Importance of Democratic Values: He underscores that powerful AI should be developed with democratic values in mind, emphasizing the importance of US leadership in this domain.
  7. Looking Forward: Altman expresses optimism about the ability to mitigate the risks of AI, and believes in its potential to grow the US economy and the world’s. He looks forward to working with stakeholders to meet the challenges and opportunities of this technological moment.

Access the archived video of the hearing here.
