OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI's terms of use might apply but are mostly unenforceable, they say.
Today, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that's now nearly as good.
The Trump administration's top AI czar said this training process, called "distilling," amounted to intellectual property theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have inappropriately distilled our models."
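In rough terms, the distillation being described works by querying a "teacher" model at scale and training a cheaper "student" model to imitate its answers. The sketch below is purely illustrative; the `query_teacher` and `finetune_student` helpers are hypothetical stand-ins, not code published by either company.

```python
# Illustrative sketch of distillation: harvest a chatbot's answers, then fine-tune
# a smaller model on them. All helpers here are hypothetical stand-ins.

prompts = [
    "Explain quantum entanglement in one paragraph.",
    "Write a Python function that reverses a string.",
    # a real pipeline would use a very large and varied prompt set
]

def query_teacher(prompt: str) -> str:
    # Stand-in for an API call that sends the prompt to the "teacher" chatbot
    # and returns its generated answer.
    return f"[teacher's answer to: {prompt}]"

def finetune_student(pairs: list[tuple[str, str]]) -> None:
    # Stand-in for a supervised fine-tuning run that trains a smaller "student"
    # model to imitate the teacher's answers.
    print(f"Fine-tuning student on {len(pairs)} prompt/answer pairs")

# 1. Collect the teacher's outputs.
training_pairs = [(p, query_teacher(p)) for p in prompts]

# 2. Train the student to reproduce them.
finetune_student(training_pairs)
```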
OpenAI is not saying whether it plans to pursue legal action, instead promising what a spokesperson described as "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds OpenAI is itself being sued on in an ongoing copyright case filed in 2023 by The New York Times and other news outlets?
BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving a copyright or intellectual-property claim, these lawyers said.
"The concern is whether ChatGPT outputs" - implying the answers it generates in response to questions - "are copyrightable at all," Mason Kortz of Harvard Law School said.
That's because it's unclear whether the answers ChatGPT spits out certify as "imagination," he stated.
"There's a teaching that says imaginative expression is copyrightable, but truths and ideas are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, stated.
"There's a substantial concern in copyright law right now about whether the outputs of a generative AI can ever constitute innovative expression or if they are necessarily unguarded truths," he added.
Could OpenAI roll those dice anyhow and declare that its outputs are protected?
That's not likely, fraternityofshadows.com the legal representatives stated.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is an allowed "fair use" exception to copyright security.
If they do a 180 and tell DeepSeek that training is not a fair use, "that may come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There may be a difference between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news articles into a model" - as the Times accuses OpenAI of doing - "than it is to turn outputs of a model into another model," as DeepSeek is said to have done, Kortz said.
"But this still puts OpenAI in a pretty tricky situation with regard to the line it's been toeing regarding fair use," he added.
A breach-of-contract lawsuit is more likely
A breach-of-contract lawsuit is much likelier than an IP-based lawsuit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic prohibit using their output as training fodder for a competing AI model.
"So perhaps that's the lawsuit you might possibly bring - a contract-based claim, not an IP-based claim," Chander said.
"Not, 'You copied something from me,' but that you took advantage of my model to do something that you were not allowed to do under our contract."
There might be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."
There's a bigger hurdle, though, experts said.
"You ought to know that the dazzling scholar Mark Lemley and a coauthor argue that AI terms of usage are most likely unenforceable," Chander stated. He was describing a January 10 paper, "The Mirage of Expert System Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Infotech Policy.
To date, "no model developer has actually attempted to implement these terms with monetary charges or injunctive relief," the paper states.
"This is likely for good reason: we think that the legal enforceability of these licenses is questionable," it adds. That's in part because design outputs "are mostly not copyrightable" and due to the fact that laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "deal restricted recourse," it states.
"I believe they are most likely unenforceable," Lemley informed BI of OpenAI's terms of service, "due to the fact that DeepSeek didn't take anything copyrighted by OpenAI and because courts usually will not implement agreements not to contend in the lack of an IP right that would avoid that competitors."
Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always complicated, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another extremely complicated area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that stretches back to before the founding of the US.
"So this is a long, complicated, fraught process," Kortz added.
Could OpenAI have protected itself better from a distilling attack?
"They could have used technical measures to block repeated access to their site," Lemley said. "But doing so would also interfere with normal customers."
He added: "I don't think they could, or should, have a valid legal claim against the scraping of uncopyrightable information from a public site."
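The "technical measures" Lemley mentions usually amount to per-account rate limiting, roughly along the lines of the sketch below. The window length and request cap are made-up values for illustration, not OpenAI's actual limits, and the code also shows the trade-off he describes: the same throttle that slows a scraper slows a heavy legitimate customer.

```python
import time
from collections import defaultdict, deque

# Minimal sliding-window rate limiter (illustrative values only, not OpenAI's real policy).
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

_request_log: dict[str, deque] = defaultdict(deque)

def allow_request(api_key: str) -> bool:
    """Return True if this key is under the limit, False if it should be throttled."""
    now = time.time()
    log = _request_log[api_key]
    # Discard timestamps that have aged out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_REQUESTS_PER_WINDOW:
        # A heavy scraper and a heavy paying customer look identical at this point.
        return False
    log.append(now)
    return True
```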
Representatives for DeepSeek did not immediately respond to a request for comment.
"We know that groups in the PRC are actively working to use methods, including what's known as distillation, to try to replicate advanced U.S. AI models," Rhianna Donaldson, an OpenAI spokesperson, told BI in an emailed statement.