In February, OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued the move was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that the decision is an important experiment in how to handle high-stakes research.
Now, six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that’s half the size of the full one, which has still not been released.
In May, a few months after GPT-2’s initial debut, OpenAI revised its stance on withholding the full code, opting instead for what it calls a “staged release”: the staggered publication of incrementally larger versions of the model in a ramp-up to the full one. In February, it had published a version that was merely 8% of the size of the full model. It published another, roughly a quarter of the full model’s size, before the most recent release. During this process, it also partnered with selected research institutions to study the full model’s implications.
The report details what OpenAI learned throughout this process. It notes that both the staged release and the research partnership agreements proved to be processes worth replicating in the future. They helped OpenAI better understand and anticipate the possible malicious uses of GPT-2. Indeed, the research partners were able to better quantify some of the threats that had previously been only speculative. A study conducted by collaborators at Cornell University, for example, found that readers on average believed GPT-2’s outputs to be genuine news articles nearly as often as New York Times ones. Several researchers outside of official partnerships also began tackling the challenge of detecting machine-generated text.
The authors conclude that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and the development of question-answering systems for medical assistance. As a result, the lab felt that the benefits of releasing the most recent code outweighed the risks. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI’s withholding of the code moot anyway.
The report has received a mixed response. Some have lauded OpenAI for sparking a discussion and introducing a new set of norms that didn’t previously exist. “The staged release of GPT-2 [...] was a useful experiment,” says Peter Eckersley, the director of research at the Partnership on AI, of which OpenAI is a member. “Through gathering the AI community to debate these matters, we've found there are many subtle pieces that need to be gotten right in deciding when and how to publish research that has a risk of unintended consequences or malicious uses.”
Others, however, have remained critical of OpenAI’s decisions. Vanya Cohen, a recent master’s graduate from Brown University who recreated an open-source version of GPT-2, argues that withholding the model does more to slow down research on countermeasures than to prevent replication. “Large language models like GPT-2 are the best currently available tools for identifying fake text generated by these same models,” he says.
Still others were more measured: “I don’t think a staged release was particularly useful in this case because the work is very easily replicable,” says Chip Huyen, a deep learning engineer at Nvidia. “But it might be useful in the way that it sets a precedent for future projects. People will see staged release as an alternative option.” Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, which also adopted a staged release for its language model Grover, echoes the sentiment: “I applaud their intent to design a thoughtful, gradual release process for AI technology but question whether all the fanfare was warranted.”
Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission. “If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”