SINGAPORE: Should we be able to sue AI chatbot firms for spreading fake news?

Singaporean Peh Chwee Hoe certainly thinks so. As reported in The Straits Times, he has drawn attention to a significant issue: the accountability of AI chatbot firms that spread misinformation.

His stance raises questions about the responsibility of these companies and the potential consequences of their actions.

Mr Peh raised his concern after reading a Straits Times report that left him “astonished”: “Ever looked yourself up on a chatbot? Meta AI accused me of a workplace scandal”, by Osmond Chia.

Mr Chia discovered that when he asked Meta AI’s chatbot about himself, it returned inaccurate information, linking his name to criminal charges he had reported on.

The situation highlights the risk of AI algorithms mistakenly associating individuals with untrue events, potentially damaging their reputation.

According to Mr Peh, “Imagine an employer being fed erroneous information linking a potential hire to unsavoury matters which have nothing to do with him other than, say, sharing the same name or as a result of the AI algorithm’s confusion.”


Mr Peh calls for stronger laws to protect individuals and institutions from defamatory content generated by AI. The core concern is the lack of accountability in such cases.

Mr Peh noted that while affected individuals have the option to pursue legal action against the responsible tech firms, many may not be aware of the false information circulating about them.

This unfairly burdens individuals with constantly monitoring their online presence to mitigate reputational harm caused by AI chatbots. “I don’t see how it is fair to let these tech companies get away with reputational murder,” Mr Peh added.

According to Mr Peh, “The onus shouldn’t be on people to have to google their names to ensure the tech bots haven’t maligned them.” /TISG

Featured image by Depositphotos