Congress Really Wants to Regulate A.I., but No One Seems to Know How

In February, 2019, OpenAI, a little-known artificial-intelligence company, announced that its large-language-model text generator, GPT-2, would not be released to the public “due to our concerns about malicious applications of the technology.” Among the dangers, the company stated, were the potential for misleading news articles, online impersonation, and the automated production of abusive or faked social-media content, spam, and phishing content. As a consequence, OpenAI proposed that “governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.”

This week, four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met to discuss “Oversight of A.I.: Rules for Artificial Intelligence.” As has been the case with other tech hearings on the Hill, this one came only after a new technology with the capacity to fundamentally alter our social and political lives was already in circulation. Like many Americans, the lawmakers became concerned about the pitfalls of large-language-model artificial intelligence in March, when OpenAI released GPT-4, the latest and most polished iteration of its text generator. At the same time, the company added it to ChatGPT, the chatbot it had launched in November, which uses GPT to answer questions in a conversational way, with a confidence that is not always warranted, because GPT has a tendency to make things up.

Despite this unreliability, within two months of its launch, ChatGPT became the fastest-growing consumer application in history, reaching a hundred million monthly users by the beginning of this year. It now draws more than a billion page visits a month. OpenAI has also released DALL-E, an image generator that creates original pictures from a descriptive verbal prompt. Like GPT, DALL-E and other text-to-image tools have the potential to blur the line between reality and invention, a prospect that heightens our susceptibility to deception. Recently, the Republican Party released the first fully A.I.-generated attack ad; it shows what appear to be actual images of a dystopian second Biden term.

The Senate hearing featured three experts: Sam Altman, the C.E.O. of OpenAI; Christina Montgomery, the chief privacy-and-trust officer at I.B.M.; and Gary Marcus, a professor emeritus at New York University and an A.I. entrepreneur. But it was Altman who garnered the most attention. Here was the head of the company with the hottest product in tech—one that has the potential to upend how business is conducted, how students learn, how art is made, and how humans and machines interact—and what he told the senators was that “OpenAI believes that regulation of A.I. is essential.” He is eager, he wrote in his prepared testimony, “to help policymakers as they determine how to facilitate regulation that balances incentivizing safety, while ensuring that people are able to access the technology’s benefits.”

Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall having executives come before lawmakers and “plead” with them to regulate their products—but this was not, in fact, the first time that a tech C.E.O. had sat in a congressional hearing room and called for more regulation. Most notably, in 2018, in the wake of the Cambridge Analytica scandal—when Facebook gave the Trump-aligned political-consultancy firm access to the personal information of nearly ninety million users, without their knowledge—the C.E.O. of Facebook, Mark Zuckerberg, told some of the same senators that he was open to more government oversight, a position he reiterated the next year, writing in the Washington Post, “I believe we need a more active role for governments and regulators.” (At the same time, Facebook was paying lobbyists millions of dollars a year to stave off government regulation.)

Like Zuckerberg, Altman prefaced his appeal for more regulation with an explanation of the guardrails that his company already employs, such as training its models to reject certain kinds of “anti-social” queries—like one I posed to ChatGPT recently, when I asked it to write the code to 3-D-print a Glock. (It did, however, write a script for a 3-D-printed slingshot. “I’d like to stress that the creation and use of this device should be done responsibly and legally,” it said, before pumping out the code.) OpenAI’s usage policies also forbid people from, among other things, using its models to create malware, generate images of child sexual abuse, plagiarize, or produce political-campaign materials, though it is not clear how the company plans to enforce them. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy states, essentially conceding that, in many cases, OpenAI will act after a violation has occurred, rather than prevent it.

In an opening statement at the hearing, the subcommittee chair, Senator Richard Blumenthal, of Connecticut, was unsparing. “A.I. companies ought to be required to test their systems, disclose known risks, and allow independent-researcher access,” he said. And, he added, “When A.I. companies and their clients cause harm, they should be held liable.” To demonstrate his point about harm, Blumenthal had introduced his remarks with a recording of himself speaking about the need for regulation, but the words were ones he’d never actually uttered. Both “his” voice and “his” statement had been manufactured by artificial intelligence. The implications, especially to the politicians in the room, were chilling.

Figuring out how to assess harm or determine liability may be just as tricky as figuring out how to regulate a technology that is moving so fast that it is inadvertently breaking everything in its path. Altman, in his testimony, floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” A.I. models (though it is not clear how that word would be defined in practice). Although this is not, on its face, a bad idea, it has the potential to be a self-serving one. As Clem Delangue, the C.E.O. of the A.I. startup Hugging Face, tweeted, “Requiring a license to train models would . . . further concentrate power in the hands of a few.” In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, it would put the company well ahead of its competitors and solidify its first-mover position, while constraining newer entrants to the field.

Were this to happen, it would not only give companies such as OpenAI and Microsoft (which uses GPT-4 in a number of its products, including its Bing search engine) an economic advantage but could further erode the free flow of information and ideas. Gary Marcus, the professor and A.I. entrepreneur, told the senators that “there is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs” and “do that with data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept secret what data their large language models have been trained on, making it impossible to determine their inherent biases or to truly assess their safety.

The most imminent danger of L.L.M.s such as ChatGPT, Senator Josh Hawley, of Missouri, noted, is their ability to manipulate voters. “It’s one of my areas of greatest concern,” Altman told him. “The more general ability of these models to manipulate, to persuade, to provide sort of one-on-one, you know, interactive disinformation,” he said, “given that we’re gonna face an election next year, and these models are getting better, I think this is a significant area of concern.”

The most expedient way to eliminate this concern would be for OpenAI to take the lead and pull its L.L.M.s from the marketplace until they no longer have the capacity to manipulate voters, propagate misinformation, or in any other way undermine the democratic process. That truly would be, to quote Senator Durbin, “historic.” But that was not on offer in the hearing room. Instead, much of the discussion focussed on what kind of regulatory agency, if any, might be created, and who should serve on it, and if it would be possible to make such an agency international. It was a fascinating, forward-looking exercise that ignored the present danger. Senator Blumenthal told his colleagues, “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.” With an election looming, and this technology in play, one does not need the predictive power of artificial intelligence to recognize that the lawmakers, despite their curiosity and bipartisan comity, have missed this moment, too. ♦
