(Bloomberg Opinion) -- The content deals between AI companies and top publishers are coming fast and furious. The latest, and claimed to be the biggest, was announced on Wednesday: OpenAI reached an agreement with Rupert Murdoch’s News Corp., reported to be worth some $250 million over five years.
“The pact acknowledges that there is a premium for premium journalism,” News Corp. Chief Executive Officer Robert Thomson said. No terms were officially disclosed, but assuming the Wall Street Journal isn’t misreporting on its owner, the $250 million figure includes “compensation in the form of cash and credits for use of OpenAI technology.”
It’s the latest in a flurry of partnerships announced in recent weeks. The Financial Times struck its own deal with OpenAI last month, followed by Dotdash Meredith, owner of titles such as People magazine and Investopedia, which also included some collaboration on advertising tools. Other OpenAI deals have included the Associated Press, Axel Springer and Le Monde.
After the unseemly hullabaloo surrounding the ChatGPT voice that actress Scarlett Johansson contended sounded a little too much like hers, the News Corp. tie-up helps OpenAI CEO Sam Altman appear to care about human endeavor — and, in this case, the hard, expensive job of reporting the news. As he said, “We are setting the foundation for a future where AI deeply respects, enhances, and upholds the standards of world-class journalism.”
Yet it’s the comments of people like Le Monde Chairman Louis Dreyfus that stand out most to me. As one of the recent beneficiaries of Altman’s selective admiration of the journalism business, you might expect Dreyfus to be as gushing as his News Corp. counterparts. But, speaking to the Journal, he said, “Without an agreement, they will use our content in a more or less rigorous and more or less clandestine manner without any benefit for us.”
Dreyfus is right. Indeed, it’s highly likely that OpenAI long ago ingested the content it is now paying to “get access” to as part of its rampant scraping of information in the public domain. In that sense, these deals should be thought of as settlements, the terms of which no doubt included the stipulation that publishers would not take OpenAI to court, as the New York Times has already done. (OpenAI did not respond to a request for comment.)
Face it, there is no opting out of AI. As admirable as the Times’ lawsuit is, it has been five months since it was filed, and even a preliminary hearing is months away in a case most think will go all the way to the Supreme Court. In the interim, publishers can either take a deal or not — knowing that the machine has likely guzzled up their content regardless. Sure, OpenAI says it is creating a tool to help publishers self-report what they want to be excluded from OpenAI’s training models, but it won’t be ready until 2025 at the earliest. OpenAI is, of course, just one company of several making large language models.
I find these deals rotten. They lack both transparency and adequate ethical scrutiny. No one who values the Fourth Estate should accept the prospect of AI companies — which should be the subject of journalistic scrutiny — one day holding extreme control over a publication’s financial health, should other business models continue to crumble. If you want a preview of what such subservience might look like, consider the current fears over Google’s recent search engine changes or Meta Platforms Inc.’s decision to no longer care about amplifying news content.
Perhaps the biggest long-term problem is this: It cannot be left to people like Altman, or indeed any other individual or company, to decide which publications are deemed worthy of preferential treatment in his AI future and which are not. His company does not possess the expertise — nor the right.
What about the small local newspapers or websites (what few are left) doing the less glamorous job of following council meetings or school boards? Or the trade magazines doing the wonkier reporting on industries like energy or engineering? Do we want AI models to be devoid of the richness of independent publications? I don’t see Altman turning up at their door with cash, though these publications are also likely already in the training data. If AI is genuinely to be a knowledge hub, these sources require sustaining too — not just the publishing giants that OpenAI worries might sue.
A more equitable system must be created, backed by law. “Bespoke, secretive deals with the largest or most influential news outlets are not a replacement for public policy,” wrote Courtney Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute. Author and journalism professor Jeff Jarvis suggested one approach: create a centralized platform controlled by a publishing coalition. This could aggregate content from any opted-in publisher and make it instantly and dynamically available to AI companies for a fee, based on use or some other agreed-upon metric.
The 178-year-old Associated Press might provide some inspiration here. The nonprofit wire service was first set up and paid for by newspapers to solve the tech challenges of its day: getting news from the Mexican-American War more quickly to readers up north. Today’s problems are vastly more complex, but the solution can start with a principle that is as true today as it was in 1846: Those who create the content should control it.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Dave Lee is Bloomberg Opinion's US technology columnist. He was previously a correspondent for the Financial Times and BBC News.
©2024 Bloomberg L.P.