Beyond Tomorrow: Gen AI In Banking, New OpenAI Model — Weekly AI Roundup
Decluttering developments in AI this week, from Google's AI Overviews hallucinating, to a Gen AI platform for banks and the possibility of a GPT doing financial analysis.
With the conversation around artificial intelligence picking up the way it has, there's no dearth of news, and that's a good thing for people who report on it. This week was particularly eventful, with notable developments and some weekend action.
Whether it's taking financial advice from generative AI or Google's inability to get its forays into the burgeoning technology right, we've got you covered.
Google’s AI Score: 0-3
Oh, Google, when will you learn? Over the past weekend, Google's recently introduced AI Overviews, designed to provide AI-generated responses to search queries, experienced a major meltdown. In many instances, the answers it generated were not just incorrect but also bizarre and potentially harmful. Asking Google how to get cheese to stick to pizza shouldn't lead to suggestions like adding non-toxic glue to the sauce, and no search result should tell you to cook your noodles with petrol.
While some of these examples may seem more amusing than alarming (thankfully), it's concerning, given that Google seems to be veering away from its own motto of 'Don't Be Evil'. This isn't the first time Google has stumbled with the launch of an AI product.
The first misstep occurred in February 2023, when the tech giant introduced Bard, the predecessor to Gemini. The AI publicly provided inaccurate information, falsely claiming that the James Webb Space Telescope had captured the first images of a planet outside our solar system.
Fast forward to February this year, and Gemini's image generation tool refused to create images of white individuals, producing historically inaccurate results such as racially diverse Nazi-era soldiers. Following public outcry, Google promptly paused the feature.
Google's move to let AI take the reins and provide summarised answers to search queries clearly still has a long way to go.
OpenAI Staffer Moves To Rival Anthropic And More
We can’t go a week without mentioning OpenAI, and it's only because the company is so interesting. In the past few weeks, it has had reality-television levels of drama, but that seems to be cooling off a little bit now.
Former OpenAI executive and leader of the company’s now dissolved superalignment team, Jan Leike, has announced that he will be joining Anthropic, the company behind the Claude AI models. In a post on X (formerly Twitter), the machine learning researcher said his new team would be working on “scalable oversight, weak-to-strong generalization, and automated alignment research".
I'm excited to join @AnthropicAI to continue the superalignment mission!
My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.
If you're interested in joining, my dms are open.
— Jan Leike (@janleike) May 28, 2024
In other news, OpenAI has quietly announced it's working on a new model, and is partnering with news organisations to help integrate AI into their newsrooms.
The announcement that the company is working on a new model was mentioned offhandedly in a single line of a blog post announcing the formation of a Safety and Security Committee. The newly instituted committee is led by directors Bret Taylor (chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (chief executive officer).
In the blog post, the company said the committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards over the next three months, present its recommendations to the board, and, after approval, share them with the public.
On Wednesday, the company announced two partnerships: a content and product deal with the multi-platform magazine The Atlantic, and a tie-up with the World Association of News Publishers on an accelerator programme for over 100 news publishers worldwide.
The partnerships with news organisations stand out for two reasons:
First, OpenAI is currently being sued by the New York Times for copyright infringement; the news organisation didn’t take too kindly to the AI company’s models trawling its content to get smarter.
Second, it looks like news publishers are slowly coming to terms with the fact that AI is here to stay. We know how the old saying goes: 'if you can’t beat ‘em, join ‘em.' That’s exactly what companies are doing now. It’ll be worth watching the work and collaborations that come out of these partnerships, and how they change the news industry as a whole.
Newgen Software Launches Gen AI Platform For Banks
Indian services and products company Newgen Software has launched a Gen AI platform for banks called LumYn. The platform is aimed at C-suite leadership and, the company claims, will help banks "enhance profitability and significantly improve customer experiences".
The platform comes with built-in datasets and behavioural patterns commonly seen across Indian banks. Once a bank signs up, the platform is fine-tuned on the bank’s own data to provide data-backed answers that help its leaders make decisions. Will it replace the suits that sit right at the top of the food chain at a bank? “Never,” said Newgen’s Head of AI Practice, Rajan Nagina.
You can read more about it here.
ChatGPT Can Make You Money? Maybe
Researchers at the University of Chicago's Booth School of Business subjected a large language model (LLM) to a series of prompts to teach it how to accurately analyse financial statements, much like a human analyst would (a rough sketch of what such a prompt might look like follows the findings below). Their findings shed light on the model's capabilities:
Can an LLM replace a human analyst? No, but it can certainly complement human analysis and fill in gaps for an underperforming analyst. Human analysts, however, excel at providing additional context that an LLM may not be aware of.
According to the Booth researchers, the LLM's performance matched that of specialised machine learning models dedicated to predicting earnings trends and company trajectories—an impressive feat. They believe the LLM holds significant promise for investors and regulators alike.
In an amusing twist, despite extensive testing, the researchers concluded that understanding the model's predictions remains elusive. They noted that it has been "empirically difficult to pinpoint how and why the model performs well."
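To make the idea concrete, here is a minimal sketch of what prompting a model to analyse a financial statement might look like. This is an illustrative assumption, not the Booth researchers' actual setup: the prompt wording, the choice of the gpt-4 model, and the toy, anonymised figures are all hypothetical.

```python
# Hypothetical sketch: asking an LLM to analyse an anonymised financial
# statement and predict the direction of next year's earnings.
# Not the researchers' code; the prompt, model and figures are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy, anonymised statement: no company name or dates, just standardised line items.
financial_statement = """
Income statement (year t-1 -> year t):
  Revenue:            9,400 -> 10,150
  Cost of goods sold: 5,900 ->  6,200
  Operating income:   1,150 ->  1,420
  Net income:           780 ->    960
Balance sheet (year t):
  Total assets: 14,300 | Total liabilities: 8,100 | Equity: 6,200
"""

prompt = (
    "You are a financial analyst. Using only the anonymised statement below, "
    "perform a brief ratio and trend analysis, then state whether earnings are "
    "more likely to INCREASE or DECREASE next year, with a one-line rationale.\n"
    + financial_statement
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the analysis deterministic so runs are comparable
)

print(response.choices[0].message.content)
```

Run over many such anonymised statements, an approach along these lines would let you compare the model's directional calls against what actually happened, which is roughly the kind of evaluation the findings above describe.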
Beyond Tomorrow is a weekly newsletter sent to your inbox every Saturday to give you a roundup of everything AI in the last week.