
Is AI's rise plateauing despite massive investments? There is a growing belief in Silicon Valley that progress in artificial intelligence (AI) may be slowing down.

After the launch of ChatGPT two years ago, many expected AI improvements to accelerate exponentially as tech companies pumped more data and computational power into their models.

However, recent developments suggest that the progress toward human-level AI, or artificial general intelligence (AGI), may not be as fast as originally hoped.

According to AFP, the belief was that achieving the promise of AGI was simply a matter of resources.

The idea was that pouring enough computing power and data into AI models would eventually produce systems capable of matching or surpassing human performance.

Progress was moving so quickly that prominent figures in the industry, including Elon Musk, called for a pause on AI research. Despite this, major tech companies have continued investing heavily in AI, including Mr Musk’s own.

OpenAI, the company behind ChatGPT, raised US$6.6 billion (S$8.87 billion) to fund its AI projects, while Mr Musk’s xAI is currently raising US$6 billion (S$8.06 billion) to buy 100,000 cutting-edge Nvidia chips to power AI models.


However, there seem to be challenges on the road to AGI.

AFP reported that the main issue is that large language models (LLMs), like the ones powering ChatGPT, are not improving at the rapid rate they once were.

In the beginning, adding more data and processing power led to impressive progress. But now, performance improvements have started to plateau.

An AI expert, Gary Marcus, explained that the high valuations of companies like OpenAI and Microsoft are mainly based on the belief that “LLMs will, with continued scaling, become AGI.”

However, he noted that he has always warned that it’s “just a fantasy.”

Another challenge is the “finite” amount of language-based data for training AI models. Scott Stevenson, CEO of AI firm Spellbook, argued that focusing only on feeding AI more language data will eventually hit a wall.

He explained that some labs focused too much on just feeding in more language, thinking it would make the AI smarter.


Sasha Luccioni, a researcher and AI lead at Hugging Face, agreed that progress was bound to slow down. She said that the focus on making AI bigger without considering its purpose has reached its limit.

According to her, “The pursuit of AGI has always been unrealistic, and the ‘bigger is better’ approach to AI was bound to hit a limit eventually.”

However, OpenAI CEO Sam Altman posted on X on Thursday, saying, “There is no wall,” without further elaboration.

Dario Amodei, CEO of Anthropic, the Amazon-backed company behind the Claude chatbot, is also optimistic.

He said that if you “eyeball” how quickly these abilities are improving, it makes you think AI would match human performance by 2026 or 2027.

Despite this optimism, OpenAI has delayed the release of the successor to GPT-4, which powers ChatGPT, because its improvements did not meet expectations, according to sources quoted by The Information.


The company is now focusing on using its current capabilities more efficiently.

This shift is reflected in its recent o1 model, which aims to provide more accurate answers by improving reasoning rather than adding more data.

Mr Stevenson said that OpenAI’s focus on teaching its model to think more before responding has led to “radical improvements.”

He compared AI development to the discovery of fire, suggesting that it’s time to apply AI to specific tasks rather than adding more data and computing power.

Walter De Brouwer, a Stanford professor, compared advanced LLMs to students moving from high school to university, noting that the “AI baby was a chatbot which did a lot of improv” and was prone to mistakes.

He said, “The homo sapiens approach of thinking before leaping is coming.” /TISG

Featured image by Depositphotos (for illustration purposes only)