SINGAPORE: In response to recent concerns about the accountability of artificial intelligence (AI) chatbot firms in spreading misinformation, Singapore’s Ministry of Communications and Information (MCI) has confirmed that current laws will apply if AI is used to cause harm.

Such harm includes spreading falsehoods, according to a Straits Times forum letter written by MCI Senior Director (National AI Group) Andrea Phua. Ms Phua was responding to a Singaporean’s call for stronger laws to protect individuals and institutions from defamatory content generated by AI.

In a letter published by the national broadsheet, Mr Peh Chwee Hoe noted that while affected individuals have the option to pursue legal action against tech firms spreading misinformation about them, many may not even be aware of the false information circulating about them.

This unfairly burdens individuals with having to constantly monitor their online presence to mitigate reputational harm caused by AI chatbots, he argued. “I don’t see how it is fair to let these tech companies get away with reputational murder,” Mr Peh said.


He added, “The onus shouldn’t be on people to have to google their name to ensure the tech bots haven’t maligned them.”

Thanking Mr Peh for airing his concerns, Ms Phua stressed Singapore’s commitment to ensuring the responsible and ethical deployment of AI. She emphasized the multifaceted approach undertaken by the government to mitigate AI-related risks, including the development of the open-source AI Verify testing toolkit and the formulation of the Model AI Governance Framework.

This framework, she explained, delineates key principles expected in the design of AI systems, emphasizing their human-centric, explainable, and transparent nature.

Ms Phua also revealed that the Government is set to launch a framework specifically tailored to address generative AI systems. She said, “We will soon launch a framework specifically to cover generative AI systems, as they have gained wider adoption. Among other dimensions, it will call for greater accountability along the AI development chain, including a clearer allocation of responsibilities and stronger safety nets.”


As for the concerns regarding legal recourse, Ms Phua emphasized the continued relevance of existing laws and regulations in cases of AI-induced harm. She reaffirmed the government’s commitment to regularly review and update legislation to address evolving technological landscapes and said:

“Harms like workplace discrimination and online falsehoods can already happen without AI. If AI is used to cause such harms, relevant laws and regulations continue to apply.”

Calling for collective responsibility among AI stakeholders and urging developers and users alike to prioritise the public good in AI development and use, Ms Phua said: “We are committed to ensuring that AI development serves the public good. We cannot foresee every harm, but an agile and practical approach can lower the risks and manage the negative effects of AI development.”

TISG/