Though artificial intelligence (AI) is still in the early stages of its development, its integration into business is becoming more visible by the day, and so are the consequences of using the emerging technology, says Vincent Mossfield, regional director of the liability risk practice at Willis Towers Watson (WTW). He was speaking at a France Macau Chamber of Commerce (FMCC) breakfast discussion held on Wednesday morning at the Sofitel hotel.
“Liabilities follow technologies,” Mossfield observed, noting that both individuals and companies can be exposed to risks that might offset any potential efficiency gains that chatbot-like technologies such as ChatGPT and DeepSeek can offer. Among them are hallucinations, which occur when large language models (LLMs) generate inaccurate information yet present it as factually sound.
[See more: DeepSeek is rushing to release its next AI model]
“If something were to go wrong, who is ultimately responsible, the user or the software?” Mossfield asked the attendees.
Taking a proactive approach to AI risks
While the world plays catch-up with the new technology, the guardrails needed to control it have not kept pace, Mossfield warned. That gap exposes businesses to a litany of uncertainties, spanning not only cyber-related hazards but also more established areas, such as contracts and products that are unable to respond to AI-related security breaches.
Citing the lack of a globally accepted best practice for mitigating AI-related problems, Mossfield advised executives to take a pre-emptive approach by implementing a recognised risk management system, such as ISO/IEC 42001. Doing so would ensure the responsible development of AI-based products and services, while signalling to customers that the business is regularly vetted by an external third party.
[See more: Scientists took years to solve a problem that AI cracked in two days]
“Given the problems that can arise, one needs to know whether there is a mechanism in the software that allows for a human override,” he said, underscoring that without one, companies are vulnerable to algorithms with hidden errors that can cause serious fiduciary and legal problems down the road.
Among notable examples of contract and data liability, Mossfield highlighted the recently dismissed case against LinkedIn, which alleged that the professional networking platform had used customer data to train its AI models without permission. In other cases, software errors led to significant material losses, with the victims receiving what Mossfield considered unfair compensation.

It is inevitable that such disputes will reach this side of the world, commented Carlos Eduardo Coelho, a partner at Macao-based law firm MdME, who spoke to Macao News earlier.
Coelho echoed Mossfield’s point that liability risk spans product liability claims, copyright protection, and the use of AI autonomy as a defence. Without precise contractual language, he noted, certain industries, such as insurance, could find themselves segregating AI-related risks and billing for them separately to enhance clarity.
[See more: Experts push for responsible development of ‘AI consciousness’]
Mossfield reiterated that his presentation was a high-level overview of AI risk scenarios. While Asia has not yet experienced any immediate consequences, the situation remains fluid, and the future existential risk is difficult to quantify, making it hard to legislate for now.
He concluded by expressing confidence that AI will inevitably benefit businesses and help them become more profitable. But he called for an effective roadmap to prepare for unforeseen problems, stating that the best way to safeguard AI at this juncture is to institute sound human oversight.