
AI Can Replace CEOs When It Starts Thinking Like Us

Artificial intelligence may be able to replace top leadership at companies in the future, but experts say emotional and ethical scenarios haven’t been factored in just yet.

This image is AI-generated (Source: Meta AI)

The use of generative artificial intelligence has seen an uptick over the last couple of years. The technology is being developed at a rapid pace, with several companies vying to become industry leaders.

GenAI has proliferated across the workforce, and employee optimism about the technology is rising. In fact, 27% of India’s white-collar workers use genAI daily, according to PwC’s Asia Pacific Hopes and Fears 2024 report. That’s the highest in the APAC region.

Similarly, the technology has made its way into the lives of senior management at companies. It’s a given that leadership at tech-based firms ends up using genAI day-to-day.

The big question is this: Can genAI be leveraged to replace CXO-level executives at companies?

Current Technology

AI-based companies in India have been developing genAI tools to make senior leadership's lives easier. Newgen Software Technologies Ltd. recently released a platform called LumYn, designed to help decision-makers within banks. On the other hand, Fractal Analytics Inc. has developed Marshall Bot, a chatbot based on Marshall Goldsmith, a well-known leadership coach.

In the case of LumYn, the platform combs through a bank’s data to give leadership insights into the kinds of products and services customers use most often. These insights can then be leveraged to drive business growth, Newgen said at the time.

The datasets required to build these two platforms are vastly different. LumYn has been built on datasets and behavioural patterns that are roughly common across retail banks, while Marshall Bot was created by converting Goldsmith’s knowledge base (the man himself, his training sessions and his books) into a dataset to train an AI.

There are global cases as well. In 2022, a Hong Kong-listed Chinese company called NetDragon Websoft Holdings Ltd. announced a “rotating CEO” for its subsidiary Fujian NetDragon Websoft Co. The AI, called Ms. Tang Yu, was being used to streamline process flow, enhance the quality of work tasks and improve speed of execution, according to a company press release.

In 2023, a Poland-based luxury rum producer did the same, introducing an AI-powered humanoid robot chief executive officer called Mika. The bot’s responsibilities included finding potential artists to design the brand’s bottles and spotting potential clients, according to a video interview with Reuters. Major decisions, like the hiring and firing of people, were still made by human executives.

Marshall Bot’s knowledge base has grown from about 1 million words in 2022 to 3 million in 2024. To create the bot, Fractal used retrieval-augmented generation, or RAG, combined with a large language model to build the database the AI draws on to answer questions.

RAG combines a traditional information-retrieval system, such as a database, with an LLM. Essentially, when given a query, the bot first searches the database and then uses what it retrieves, together with the LLM’s generative capabilities, to produce a more accurate answer tailored to the user’s query.
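How that retrieve-then-generate loop hangs together can be shown in a minimal Python sketch. Everything here is a hypothetical stand-in, the two stored passages, the similarity-based retriever and the call_llm() stub alike; it is not Fractal's actual implementation.

from difflib import SequenceMatcher

# Toy knowledge base; a real system would index thousands of documents
DOCUMENTS = [
    "Marshall Goldsmith coaches executives on stakeholder-centred leadership.",
    "Feedforward focuses on future suggestions rather than past mistakes.",
]

def retrieve(query, k=1):
    # Retrieval step: rank stored passages by rough text similarity
    ranked = sorted(
        DOCUMENTS,
        key=lambda d: SequenceMatcher(None, query.lower(), d.lower()).ratio(),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt):
    # Stand-in for a hosted LLM; a production bot would call a real model here
    return "Answer grounded in: " + prompt

def answer(query):
    context = "\n".join(retrieve(query))                  # fetch relevant facts
    prompt = f"Context:\n{context}\n\nQuestion: {query}"  # ground the model
    return call_llm(prompt)                               # generation step

print(answer("What is feedforward?"))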

“Marshall Bot can answer questions about coaching, leadership and business. What it can’t do is answer beyond this realm,” said Fractal Analytics’ Principal Strategy Manager Jay Amin. The bot has been designed with a very specific use case in mind. In addition, Goldsmith didn’t want his likeness or advice attached to answers on subjects he wasn’t confident about.

Herein lies the issue when it comes to getting an AI to replace CXO-level leadership.


CEOs And AI

A CEO, for example, isn’t just confined to decision-making with a top-down view. The scope of their work tends to be much broader than what an AI can reasonably do, for now.

“Some of a CEO’s responsibilities include interactions in the real world. The knowledge of the same is in the physical world,” Amin said. There are also several verbal and non-verbal cues that can only be identified and responded to in the real world. “Reading the room” is what Amin calls it.

Put genAI in such a scenario and it is likely to give the wrong answers, because genAI models don’t have access to the right data and haven’t learnt from the real world.

Even though the adoption of genAI and its integration into companies has been limited so far, a PwC survey has found that CEOs anticipate greater impact from the technology going ahead. According to the report, 70% of CEOs believe that genAI will change the way a company creates, delivers and captures value in the next three years.

There are some who don’t think the technology should replace top leadership. Instead, AI should keep doing what it does for CXOs now: act as a copilot. "It could learn by being a part of meetings and discussions,” said Ankush Sabharwal, chief executive officer of Corover.ai. His company has built BharatGPT, one of the few locally created LLMs specifically designed for the Indian market.

Sabharwal and the folks at Fractal have the same opinion about AI replacing the C-suite: it can’t replace the gut feeling that comes with making business decisions. “I think executives go with their gut, and then they prove their decisions with data.”

While current expert thinking points in one direction, a 2023 survey from EdX reveals the complete opposite. The company found that 47% of top management executives believe “most” or “all” of the CEO role should be completely automated or replaced by AI. Among CEOs themselves, 49% agreed.

In contrast, only 20% of knowledge workers, that is, those with domain-specific skills, say that they can be replaced by AI. However, whether this reflects overconfidence or a misunderstanding of AI’s capability is unclear.

“For AI to be truly effective, it must be underpinned by robust frameworks and comprehensive data, encompassing industry trends, market analytics, and financial records,” NEC Corporation India's President and Chief Executive Officer Aalok Kumar told NDTV Profit in an email response to queries.

Being Proactive

A key area where an AI is likely to stumble is one where C-suite executives excel: proactiveness. While CEOs can respond to macro events such as market swings, unprecedented global events and supply-chain disruptions, AI doesn’t have that capability.

Anthropic’s Claude 3.5 Sonnet, OpenAI’s GPT-4o and Ola’s Krutrim are all reactive in nature. You ask and they respond. But the capability to anticipate requirements or notice trends before they arise is missing.

But that might be by design. “We don’t know what the emergent behaviour from an AI will be, whether it’s safe for humans or aligned with our values,” said Akbar Mohammed, head of Fractal Dimension, adding that making the technology proactive hasn’t been fully tested. Fractal Dimension is an interdisciplinary team at Fractal Analytics that works on responsible and sustainable AI-adoption strategies.

It is possible to build an AI model to be proactive, but such systems generally operate in low-stakes environments, like push notifications while making purchases on e-commerce platforms or booking flight tickets.

There’s also the question of how high the stakes are, which should be kept in mind, according to Mohammed. He cited the example of implementing AI in healthcare, which may or may not be safe depending on the parameters it is given to function on. “AI still hasn’t shown the ability fully to be able to protect us or fully be aligned with human values.”

AI is still best used as part of an augmentation toolkit, as opposed to a full replacement for human decision-making, according to him.


Training An AI CEO

Whether it’s training an AI to take over as CEO or to help you book tickets, the requirement is the same: every AI model needs to be trained on data.

Training an AI works like this: you provide it with a dataset and a specific use case in mind, then refine the results it produces by continually adjusting its parameters until it gives the answers required. But in the case of training an AI CEO, what sort of datasets does one need? Corover’s Sabharwal says that common behaviours across around 4,000 CEOs may be enough for a robust dataset.
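In code, that provide-data-then-refine loop looks roughly like the minimal PyTorch sketch below. The dataset and the sum-of-features "use case" are invented purely for illustration.

import torch
import torch.nn as nn

# Toy dataset for a made-up use case: predict the sum of four input features
x = torch.randn(100, 4)
y = x.sum(dim=1, keepdim=True)

model = nn.Linear(4, 1)                             # the adjustable parameters
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # how far outputs are from the answers required
    loss.backward()              # work out which way to nudge each parameter
    opt.step()                   # refine the parameters, one small step at a time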

For context, 4,000 is more than the total number of companies listed on the National Stock Exchange (2,266 as of Dec. 31, 2023) and roughly 75% of those listed on the Bombay Stock Exchange (5,309 as of Jan. 24, 2024). There’s also the size of companies to consider. The roles and responsibilities of a CEO at a mid-sized corporation would be vastly different from those at a larger company. But overlaps are likely.

“Every role has its own requirements and specific assessment indicators. The AIs we create will also have to work accordingly,” said Sanjeev Menon, co-founder and head of tech and product at E42.ai. His company has been building AI co-workers that automate complex, procedure-intensive processes that have typically been performed by people.

But even to build an AI that can handle the role a CEO plays, it still needs to be trained on generalised capabilities first. In that regard, the dataset requirement Sabharwal mentions is important. Only once that is taken care of can one potentially think of narrowing the training down to a specific company.

“It’s never going to be a 'one size fits all' model. An AI will have to be trained on an organisation’s data to be really effective and it will have to go through reinforcement learning through human feedback for a while, for it to at least start being able to provide some level of decision-making support,” Menon said.
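The reinforcement learning from human feedback that Menon refers to, commonly shortened to RLHF, typically begins with a reward model trained on pairs of answers that humans have ranked. Below is a toy sketch of that preference step, with random embeddings standing in for real model outputs; it is illustrative only.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    # Scores a response representation; higher means humans preferred it
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, emb):
        return self.score(emb).squeeze(-1)

reward = RewardModel()
opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

# Toy embeddings standing in for a human-preferred and a rejected answer
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Bradley-Terry-style loss: push preferred scores above rejected ones
loss = -nn.functional.logsigmoid(reward(chosen) - reward(rejected)).mean()
loss.backward()
opt.step()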

A core part of the human experience, CEO or otherwise, is the understanding of “concepts” and their application across different situations. Humans are able to apply concepts widely because we can draw on our own past experiences and knowledge to understand when and where to use them. This isn’t the case for LLMs and genAI models.

At their core, all generative AI models and LLMs work on pattern recognition.

Technically, via machine learning, an AI model can be taught a specific “concept.” For example, to teach an AI what a cat is, researchers provide it with a lot of data, in this case pictures of cats.

To learn, the AI uses a set of rules already fed into the system to analyse the pictures and find patterns. From these, it builds its own internal representation of what a cat looks like and can then identify cats in new images. This is broadly how most AI models are built, deployed and constantly tweaked.
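A stripped-down version of that find-patterns-then-identify cycle, again as a hedged sketch: random tensors stand in for photographs, and the tiny network below is far simpler than anything used in practice.

import torch
import torch.nn as nn

# Toy data: 64 flattened 8x8 "images", labelled 1 for cat, 0 for not-cat
images = torch.randn(64, 64)
labels = torch.randint(0, 2, (64,)).float()

classifier = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for _ in range(50):
    opt.zero_grad()
    logits = classifier(images).squeeze(-1)
    # The network adjusts its weights to find patterns separating the labels
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    opt.step()

# Identify a new image using the patterns the model has learnt
new_image = torch.randn(1, 64)
print(torch.sigmoid(classifier(new_image)) > 0.5)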

“AI learning using conceptual knowledge in a completely new environment for a completely different problem altogether is the 'aha!' moment,” according to Menon.

Conceptual learning in AI has been a longstanding debate. Since at least the 1980s, philosophers and cognitive scientists have argued that AI isn’t capable of interconnecting concepts and applying them in new settings, a capability referred to as “compositional generalisation.” But recent work suggests that we’re a step closer.

A researcher duo from New York University and Spain’s Pompeu Fabra University published a paper last year demonstrating a new approach to teaching AI, called meta-learning for compositionality. The technique works by “training neural networks—the engines driving ChatGPT and related technologies for speech recognition and natural language processing—to become better at compositional generalisation through practice", according to a press release.

The research found that this method outperforms existing approaches and is on par with, and in some cases better than, human performance. While models like GPT-3 and GPT-4 have shown flashes of such behaviour, it’s not at the scale the researchers found. AI models have historically struggled to emulate this kind of human thinking.

Meta-learning for compositionality can be used to teach AI models novel concepts, an area where we’ve previously struggled, and to let them apply those concepts in cases where they couldn’t before.
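The “practice” in meta-learning for compositionality is organised into episodes: each one assigns fresh meanings to made-up words and then asks the learner to compose them in unseen ways. Below is a toy illustration of that episode structure, not the paper’s actual task set.

import random

PRIMITIVES = ["dax", "wif", "lug"]   # nonsense words, remapped every episode
MEANINGS = ["RED", "GREEN", "BLUE"]

def make_episode():
    # Fresh word-meaning pairs force the learner to use the study examples
    # rather than memorise a fixed vocabulary
    mapping = dict(zip(PRIMITIVES, random.sample(MEANINGS, 3)))
    study = [(w, mapping[w]) for w in PRIMITIVES]
    # The query composes familiar words into a combination never studied
    a, b = random.sample(PRIMITIVES, 2)
    query = (f"{a} twice then {b}", [mapping[a], mapping[a], mapping[b]])
    return {"study": study, "query": query}

print(make_episode())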


Emotions And Other Stumbling Blocks

Perhaps the biggest challenge in training an AI to take over a company is emotional intelligence, something that researchers have made little headway with so far. Aspects of human intelligence such as providing inspiration and motivation, resolving political conflicts and managing people are things AI still hasn’t figured out.

“The current state of affairs of how AI is assessed today doesn’t look at these things. The focus is on measuring how good they are factually and how creative they are. A CEO's role is more complex,” said Mohammed. “We haven’t seen that level of creativity or independent action from large language models."

Once we’ve passed technical hurdles, can we ask AI to run a profitable company over 10-15 years, keeping in mind how markets operate, macro events and other factors? It’s not that simple.

“If we don’t put in the appropriate guardrails, AI can just as easily say that employees need to work 16-18-hour workdays, or it’ll one day decide to lay off 80% of the staff,” said Fractal’s Amin.

There are other questions to consider as well, such as: are workers ready to report to a synthetic intelligence? Setting the right parameters and weighing ethical considerations are still very much active and integral conversations in the industry.

“AI is a powerful tool, but cannot replace the human elements of emotional intelligence and empathy crucial in C-suite roles,” said NEC Corp.'s Kumar.

There’s also the cost aspect to consider. Training LLMs like GPT-4 is incredibly expensive; OpenAI CEO Sam Altman has previously said that building the model cost more than $100 million. NDTV Profit has previously reported how building AI in India need not emulate the West and could instead focus on hyper-specific use cases. That could be a potential workaround.
