Why traditional metrics don’t measure the true value of artificial intelligence, at least for now

Because AI pilots were rushed onto legacy systems without first adjusting those underlying processes, the initial impact is fairly limited, comments Finoverse co-founder Anthony Sar
  • Investment house Jefferies argues that despite spending less than their global peers, Chinese developers have managed to narrow the performance gap among competing AI models.

UPDATED: 01 Dec 2025, 8:05 am

In a McKinsey survey published earlier this month, the consultancy found that while 39 percent of respondents said artificial intelligence (AI) had some effect on their corporate earnings, fewer than 5 percent believed the two were directly linked, underscoring how hard it is to identify how, and when, AI will usher in the secular growth cycle expected to reshape the global economy.

Trillions of dollars continue to flow into AI-related commitments. But as cost discipline tightens, quantifying their immediate impact with traditional financial metrics is misguided, notes Anthony Sar, chief executive officer and co-founder of Finoverse, a Hong Kong-based networking firm.

[See more: Identifying the AI bubble: Are we there yet?]

“Amid the rollout of new AI models, many rushed these pilots onto legacy systems without first adjusting their underlying processes, so the productivity gains simply haven’t materialised or been financially impactful,” Sar explains to Macao News, arguing that AI’s full potential might require a complete redesign of workflows, risk controls, and data governance from the ground up.

His analysis points to a J-curve trajectory, in which operational metrics initially dip before rising later. A similar pattern accompanied the late 19th-century introduction of electricity, which yielded only marginal gains until factory floors were redesigned and production lines shifted from manually run machines to mechanised ones.

When money matters 

But until AI’s profitability becomes more visible, analysts warn that market valuations remain vulnerable to a sell-off, drawing comparisons with the aftermath of the dotcom bubble. ChatGPT, the closed-source large language model (LLM), has emerged as a monetisation bellwether: with more than 800 million weekly active users but only 35 million paid subscribers, fewer than 5 percent of its users are projected to generate almost $13 billion for its parent company, OpenAI, which remains loss-making.

The path towards profitability will also encounter meaningful competition. Cheaper open-source models built by Chinese developers have successfully narrowed the performance gap with their costlier closed-source rivals. Aggregate download figures extend the picture: Alibaba’s Qwen registered 385 million downloads in October, surpassing the 346 million recorded by Meta’s Llama, according to data from the Hugging Face platform compiled by the ATOM Project, a US coalition supporting open-source AI.

Prioritising an open-source approach should prompt an inference boom in which lower AI pricing drives faster application development and user adoption, notes Edison Lee, an equity analyst at Jefferies. Lee argues that although China’s AI inference relies on fewer advanced semiconductors, those investments could yield better returns because the combined capital expenditure is significantly lower: he estimates around $124 billion was spent between 2023 and 2025, less than one-fifth of the nearly $700 billion spent by US hyperscalers over the same period.

Just a little longer

A Deloitte poll conducted in October found that while most respondents expect a satisfactory return on their AI investments within two to four years, that window is a significantly longer payback period than the seven to twelve months other technologies have typically taken.

“In our own experience, AI’s value did not come from automating old processes, but from driving entirely new ones,” Finoverse’s Sar shares, suggesting that the real measure of success is whether AI enables materially better customer or user experiences, not whether back-office duties are streamlined or cost savings achieved.

[See more: AI will make us ‘rediscover what it means to be human’, expert promises]

During Hong Kong FinTech Week in November, Finoverse deployed Samantha, an AI assistant designed to handle 1,500 live voice calls and generate personalised agendas at scale, with Sar describing it as an example of something that was previously near impossible to achieve manually but is now commercially feasible.

“AI adoption will be driven more by new revenue opportunities than by cost reduction alone,” he predicts, adding that the transition will be slow because organisations will have to move beyond merely optimising workflows and rethink their business fundamentals.
