Google DeepMind Shifts From Research Lab To AI Product Factory

The company combined its two AI labs to develop commercial services, a move that could undermine its long-running strength in foundational research.

Demis Hassabis, chief executive officer of DeepMind Technologies.

Over one week in mid-May, two companies announced artificial intelligence products built on one of Google’s seminal breakthroughs. On May 13, OpenAI Inc. introduced a new version of the model that underpins ChatGPT, its wildly popular chatbot, which relies on a technology known as a transformer that Google first described in a research paper in 2017. The next day, Google announced AI Overviews, a feature that answers some searches with summaries written by its own system based on the same technology.

The Overviews launch didn’t go well. The feature began offering embarrassing suggestions, such as advising people to eat rocks and put glue on pizza. The next week, Google implemented new guardrails, describing them in part as a way to prevent Overviews from inadvertently presenting satirical content as fact. It was a bad look for a company that could use a win in AI. Many people across tech already think products such as ChatGPT have the potential to eliminate the need for Google Search, which accounts for the majority of the company’s revenue, so the stakes can seem existential.

The stumble comes a little more than a year into one of Google’s major attempts at course correction in AI. Facing immense pressure to keep pace with OpenAI and other competitors, the company said in April 2023 that it would combine its two elite AI teams, Google Brain and DeepMind, into what has been described as an AI “super-unit” called Google DeepMind. The unit would have to achieve two distinct goals: improving Google’s track record on commercial AI products while maintaining the company’s historic strength in foundational research. The effort remains a work in progress, according to interviews with about two dozen people familiar with the company’s inner workings, most of whom requested anonymity to avoid professional reprisals.

Google DeepMind headquarters in London. Photographer: Jose Sarmento Matos/Bloomberg

The two labs had long operated separately. Google Brain was a place where researchers pursued passion projects with loose oversight. London-based DeepMind, co-founded by AI luminary Demis Hassabis, could be top-down and secretive. Hassabis took over as the chief executive officer of the combined unit, substantially increasing his influence within the company. Over the past year, a procession of Googlers, including co-founder Sergey Brin, has visited him in London. Hassabis has spent his career primarily focused on research, but his new job will necessarily center more on commercialization. Matt Brittin, Google’s commercial chief for Europe, the Middle East and Africa, has started working directly with Hassabis’ unit. There are even rumblings within the company that the group may eventually ship products directly.

In May, the lab released a new version of AlphaFold, a landmark tool for predicting protein structures. Hassabis says it could develop into a $100 billion business, but some people at Google have questioned whether he should be dedicating so much time to it, according to two people familiar with the situation. His unit’s primary focus is Gemini, Google’s flagship AI model, and people there say they’re making progress on it. Researchers inside the AI unit have told colleagues they’re proud of advances such as Gemini’s “context window,” the amount of information the system can analyze at once. That capability is particularly useful to a company whose enormous trove of data is one of its key competitive advantages.

Combining the two divisions plays to what Hassabis describes as another one of Google’s greatest strengths: the number of talented researchers it has working on fundamental AI. Yet it also runs the risk of upsetting a culture that’s led to Google’s success in foundational AI research. According to people familiar with the lab, some researchers are frustrated with having to follow road maps they feel have been imposed from above. Some of its biggest advances before the change came from small teams that banded together informally, and there’s a feeling that the all-hands effort leaves less room for experimentation.

Hassabis (left) and Alphabet CEO Sundar Pichai at Google’s annual I/O developers conference in Mountain View, California, last month. Photographer: Glenn Chapman/AFP/Getty Images

The pressure is already leading to a sense of fatigue, say two people familiar with the company. It hasn’t helped that Gemini has suffered one snafu after another. Its launch was marred by the model’s generation of historically inaccurate images, including at least one that depicted people of color as German soldiers in World War II. Hassabis says that he’s learning more about introducing products and that Google’s product teams, in turn, are dealing with the novel challenges of generative AI, which has the potential to behave unusually when placed in the hands of the general public. “It’s different from normal technology products,” he says. “The underlying technology behaves differently, has certain strengths or weaknesses. So I think it’s an interesting learning curve for all of us.”

That Google ended up with two top-tier AI labs at all is a relic of a more freewheeling era in the company’s history, when it also had two music subscription services, two venture capital groups and two mobile operating systems. Brain came out of the company’s frontier technology group, X; it was folded into Google in 2012. Google wove the team’s work into popular products, says Brain co-founder Andrew Ng, who’s now on the board of Amazon.com Inc. “Google Brain was very practical, working with partners to ship products and driving revenue for the mothership,” Ng says.

In 2014, Google acquired DeepMind, a startup that was making tantalizing advances in AI by building programs that could play vintage video games. For much of the next decade, Google’s leaders reasoned that having separate labs was useful, fostering creativity and testing different approaches to problems, according to a person familiar with the company.

Operating more than 5,000 miles from Google headquarters in Mountain View, California, DeepMind retained a spirit of independence. Hassabis, once a teenage chess champion, seemed fixated on winning contests. In 2016 a DeepMind model beat one of the world’s best human players of Go, an ancient strategy game. People who worked for Hassabis say he’d openly muse about winning a Nobel Prize.

When Google restructured as the Alphabet conglomerate in 2015, it ended up establishing DeepMind as a separate but wholly owned subsidiary. In retrospect, the setup was bound to stoke resentment. DeepMind researchers had full visibility into Brain’s projects and code, and they could freely badge into Google’s offices in Mountain View. The openness didn’t go both ways, though: One former Brain researcher recalls working from a coffee shop on a trip to London after not being able to get into DeepMind’s office.

Some DeepMind publications bore similarities to what Brain had in the works, sparking further suspicion among Brain researchers, say three people familiar with the company’s operations. At one point, a few Brain researchers tried to keep DeepMind employees from being able to access their work, according to one person, but Brain’s leadership urged the researchers not to undermine the lab’s open culture. Conferences where Brain and DeepMind researchers ran into one another could get awkward.

A few years after the DeepMind acquisition, Hassabis and Jeff Dean, who ran Google Brain, met to try to bolster ties. But the groups continued operating in largely distinct orbits and, before the merger, were working on separate large language models. Hassabis says the rivalry between Google Brain and DeepMind was no more intense than the competitive energy within each lab. “Most of it was the spirit of collaboration and, also, mutual respect,” he says.

Dean at a 2020 Google AI event in San Francisco. Photographer: David Paul Morris/Bloomberg

Then, in 2020, OpenAI released a paper describing its GPT-3 model, prompting Google to reassess how it was doing business. If a startup with fewer resources could leverage Google’s own research to jump ahead, maybe the company needed a better way of working.

Some Brain researchers concluded their own lab needed a clear power center, according to several people familiar with its operations. A major requirement for any AI work is access to computing power, and each Brain researcher received an allocation, a process that one person likened to the distribution of tickets for games at a carnival. A model on the scale of GPT-3 or Gemini, though, would require far more power than any single researcher had access to, so building one meant a group of people would have to pool their credits. That was a tough proposition among competitive and often egotistical scholars. Google Brain’s attempts to come up with another system for doling out access got mired in bureaucracy, according to two people familiar with the company.

When Alphabet CEO Sundar Pichai announced the merger in April 2023, many Google alums applauded the decision—and wondered why it had taken so long. The name of the new unit’s first major product, Gemini, was inspired by the Greek myth in which a pair of twins fuse in the heavens.

Andrew Harrison, a former Alphabet executive who’s now CEO of venture capital firm Section 32, says the reorganization was logical. “What they did to bring it all under one banner, one set of motivations, was very smart. I think it makes it really clear to the people both inside and outside of Google what the job is,” he says. “And the job is to get the next generation of generative AI techniques in the hands of their customers.”

The merging of the labs “was surprisingly smooth and pleasant,” Hassabis says. “There was a lot more in common than there was different.” Still, some workers on both sides say pure research is getting short shrift, according to people familiar with the company’s operations. After the merger, some teams focused on scientific applications of AI feared their projects would be scrapped, says one former employee.

While no one is getting as much computing power as they want, the supply is tighter for teams engaged in pure research, say the former employee and others familiar with the lab. But DeepMind has always had to balance proven scientific work and more exploratory research, says Sid Jayakumar, a researcher there who left in 2023 to form the startup Finster AI. “There will always be that tension,” he adds.

The discontent about pushing too hard on commercialization is a mirror image of the internal critique of the past two years, when Google was struggling to bring generative AI to consumers. Researchers who wanted to ship products departed for startups because they thought the company was moving too slowly. According to people familiar with the lab, Brain researchers have also mourned the loss of their brand, even as some welcomed the prospect of stronger leadership.

In part, the company is trying to solve its morale problem with money: Top Google researchers are commanding several million dollars a year in total compensation, according to several people with knowledge of the situation.

Hassabis has also made the case that researchers working on foundational science should embrace commercial development as an asset. He says he agreed to assume his new role partly because AI models are becoming more versatile and because developing commercial products yields techniques that are useful for pushing research forward. Google’s ubiquitous consumer products, he says, provide a unique testing ground for its science experiments. “We get feedback from millions of users,” Hassabis says. “And that can be incredibly useful, obviously, to improve the product, but also to improve your research.”

©2024 Bloomberg L.P.
