
AI ethics in the workplace: beyond education and data transparency 

Industry experts at an FMCC gathering recommend that companies be forthcoming about their use of autonomous decision-making programmes and explain how customer data is collected
  • Contextualising ethical AI challenges can help educate system users, ensuring they understand the implications of the technology and their responsibilities

UPDATED: 02 May 2025, 12:44 pm

Swift progress in artificial intelligence (AI) has presented business executives and risk officers with a new set of challenges, pitting innovations aimed at enhancing operational productivity against regulations designed to safeguard the public interest.

Central to the France Macau Chamber of Commerce (FMCC) discussion, held at the Sofitel on Wednesday morning, was identifying the immediate priorities not just for stakeholders but for those who oversee AI use, noted moderator Jérémy Artan de Saint Martin, managing director at iExcel Technologies, who led a panel of AI experts that included academics, legal consultants and tech entrepreneurs.

Developers and proponents of AI tend to lean towards open innovation, advocating self-regulation or light-touch regulation, opined Stéphane Monsallier, founder and CEO of System in Motion, who said companies should focus on identifying what type of AI works best for their organisation rather than chasing the latest system releases.

But with the growing adoption of autonomous decision-making, the invited experts discussed how business managers can promote responsible AI use amid rising concerns over possible exploitation, agreeing that a broad, risk-based approach may not be enough to maintain trust in an AI system.

[See more: The only safe jobs in banking right now are AI roles]

Developing an understanding of the various ethical risks associated with AI requires a public awareness initiative that recognises the technology’s full potential, both the good it can do and the harm it can cause, commented Dr. Serge Stinckwich, head of research at the United Nations University Institute, a Macao-based think tank.

While few consider how AI models are created, the public’s understanding can change when these ethical challenges are contextualised, he explained.

Just as our consumption behaviour changes upon discovering that products may come from unethical sweatshops, our views can shift on learning that certain AI models might have been built on stolen or “scraped” data, a practice that involves extracting data en masse from websites and other sources. This can reframe system developers not as technological innovators, but as operators of data or digital sweatshops.

Admit when you’re a robot

Experts discussing the ethical implications of AI at the FMCC gathering earlier this week

The speakers recommended that executives be open about their AI use when customers engage with an automated decision-making system, with Monsallier lamenting that whenever a chatbot passes itself off as a human without informing the client, trust in the information is lost.

Panellist Dr. Sara Migliorini, an assistant professor of global legal studies at the University of Macau’s Faculty of Law, concurred, arguing that the most effective policy is one that incentivises transparency, advocating the use of game theory and the implementation of reasonable measures to ensure best practices are followed and future liabilities mitigated.

Following a framework like the ISO/IEC model ensures that organisations are addressing pressing challenges and exercising caution in line with industry standards, Migliorini said.

[See more: Brazil hopes to catch up with China, the US in artificial intelligence]

Although the panellists disagreed over how philosophical concepts of ethics apply in different situations, they agreed that it is imperative for senior executives to understand when AI is being deployed and to adopt a readiness posture for an evolving regulatory landscape.

A company that does not formalise any AI governance or oversight risks data leakage the moment an employee brings a smartphone to work, Monsallier said, adding that the problem doesn’t stop there.

Should an employee use an external AI system and then pass on hallucinated information to a customer, the result can be catastrophic: a blow to everyone’s reputation that is almost impossible to recover from, he said.
