When Professor Tshilidzi Marwala asked ChatGPT who Tshianeo Marwala was, the large language model (LLM) replied that the person in question was his wife. The response was incorrect, as Tshianeo Marwala was, in fact, Professor Marwala’s grandmother.
The answer was 89 percent accurate, the mathematician claimed, yet it was nonetheless false. Because deep learning models make predictions based on likelihoods, the algorithm followed the higher probability that a woman sharing his family name would be a spouse rather than a direct ancestor. Accuracy is continuous, but truth is discrete, Marwala said, illustrating the shortcoming of conflating the two notions in an AI context.
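Marwala's example can be sketched in a few lines of code. The relation labels and probabilities below are purely hypothetical placeholders, not actual ChatGPT outputs; the sketch only shows how selecting the most likely completion can still produce a false statement.

```python
# Hypothetical probabilities a language model might assign to the
# relationship between "Tshianeo Marwala" and Professor Marwala.
# Illustrative numbers only, not real model outputs.
candidate_relations = {
    "wife": 0.89,
    "grandmother": 0.07,
    "daughter": 0.04,
}

ground_truth = "grandmother"

# The model answers with the highest-probability candidate ...
prediction = max(candidate_relations, key=candidate_relations.get)

# ... which may carry high confidence yet still be wrong:
# accuracy is continuous, but truth is discrete.
print(f"Predicted: {prediction} (p = {candidate_relations[prediction]:.2f})")
print(f"Matches ground truth: {prediction == ground_truth}")
```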
[See more: The world’s first wind-powered underwater data centre is in Shanghai]
The UNU rector and UN under-secretary-general shared these remarks at the opening of the UNU Macau AI Conference, held this past Friday under the theme “AI for Humanity: Building an Equitable Digital Future,” an event that also commemorated the 80th anniversary of the United Nations and the 50th anniversary of the United Nations University. Professor Marwala was one of more than 150 speakers at this year’s symposium, which drew 500 participants from some 30 countries to discuss the urgent need to shape AI policy frameworks.

The Silicon Valley Effect
Despite recurring calls for multi-stakeholder dialogue across the 30 parallel and five keynote sessions, the summit convened amid a global race for AI dominance. Between US$700 billion and US$1.5 trillion is expected to flow into AI-related infrastructure this year, with total spending projected to reach US$3 trillion by 2029, spurring unprecedented demand for semiconductors, data centres, and reliable energy sources to power the industrial transformation.
Efforts to establish governance oversight have largely faltered. Innovation continues to take precedence in a dynamic often referred to as the Silicon Valley Effect, leaving tech innovators to shape the direction of institutional oversight. But this isn’t the only factor.
In his keynote address, titled “Is there a path to an Equitable AI Future?”, Simon See, head of the NVIDIA AI Technology Centre and professor at Shanghai Jiao Tong University, argued that falling technology costs have played a significant role in making AI more accessible, but also highlighted infrastructure inadequacies and funding gaps as pivotal barriers to wider implementation.
[See more: Identifying the AI bubble: Are we there yet?]
Conference panellists further acknowledged that steps to broaden AI’s collective benefits were frequently burdened by contradictory goals. While AI was recognised as a powerful tool in fighting climate change, substantial amounts of energy would be required to train advanced models capable of delivering workable solutions.
Meanwhile, even as the BRICS countries (Brazil, Russia, India, China, and South Africa) have developed an alternative and innovative governance framework, expanding into a BRICS+ coalition brings in additional views and priorities, leaving Silicon Valley’s singular innovation mandate almost unchallenged in setting the industry narrative.
Changes in human behaviour
With youth education and AI literacy identified as principal steps to catalyse AI-driven solutions, a new UNU–Springer book series was launched at this year’s conference. The programme was established to meet growing demand for evidence-based research that can inform actionable strategies suited to today’s political and economic realities.
Though summits like this propose governance frameworks to balance ethical imperatives with practical deployment, it is the adoption of a harmonised AI language rooted in education that could ultimately drive changes in human behaviour, Professor Marwala said, arguing that encoding human values and societal goals into LLMs would mitigate adverse effects and build greater trust among policy influencers and tech innovators.
[See more: AI education is now compulsory at public schools in Beijing]
Although LLMs were introduced only a few years ago, AI has already begun to reshape labour market trends. Unemployment among entry-level, white-collar service workers is rising, suggesting that AI is already acting as a substitute for routine cognitive tasks. Understanding these shifts is important for shaping policies that maximise the benefits while mitigating the risks, since AI does not automatically generate positive progress, researchers commented.
The technology can both empower and harm. The solution is not a straight path but rather a shared journey, Marwala remarked, noting that the conference would continue to serve as an important platform for convening innovative ideas and drawing out solutions to meet these evolving challenges.


