SINGAPORE: Singapore’s business landscape faces mounting risks as many companies remain unprepared for the integration of artificial intelligence (AI) into their operations, according to a new industry report. A significant number of firms are struggling to build internal expertise and integrate AI risk management into their core control frameworks.
Only 13% of organisations in Singapore are fully prepared to adopt AI, according to the latest Cisco AI Readiness Index. As businesses across the island nation explore AI's potential, the urgency of robust risk governance becomes increasingly clear. The findings come amid rising reports of cyber threats: 91% of firms have experienced identity-related breaches, many linked to multi-cloud environments and emerging generative AI (GenAI) tools.
A new white paper published by the Financial Services Information Sharing and Analysis Center (FS-ISAC), titled “Charting the Course of AI: Practical Considerations for Financial Services Leaders”, calls on firms to adopt a comprehensive “all-hazards” approach to managing the risks associated with GenAI. This approach, the report argues, should prioritise governance frameworks as organisations incorporate AI into their core operations.
The FS-ISAC white paper stresses the pressing need for financial institutions to manage operational, legal, and strategic challenges while simultaneously leveraging AI’s potential to transform productivity.
Despite AI’s capacity to boost efficiency, the report identifies several critical risks, particularly in the near term. Among these are the over-reliance on AI systems, the lack of mistake-tolerant frameworks, the potential for hallucinated or misleading AI outputs, and the growing phenomenon of “shadow AI”—unauthorised use of generative tools by employees.
A major concern highlighted in the white paper is growing exposure to legal risks, particularly those related to intellectual property (IP). The report warns of "IP laundering", where AI-generated content inadvertently incorporates copyrighted material.
As global regulatory frameworks such as the European Union's AI Act, the OECD guidelines, and the Monetary Authority of Singapore's (MAS) FEAT principles continue to evolve, companies are expected to face increasing compliance challenges, particularly those operating across borders.
The FS-ISAC report lays out potential risks over varying timeframes. In the short term, AI is expected to augment roles rather than replace them. Over the medium term, the report anticipates greater regulatory complexity alongside shifts in the labour market, especially in areas such as cybersecurity. Over the long term, it warns of risks such as model collapse, market consolidation, and diminishing returns on AI investments as tools saturate the market and productivity gains plateau.
To help firms assess their preparedness for AI, the FS-ISAC report provides a set of eight critical questions. These questions focus on issues such as governance structures, training data management, employee safeguards for AI use, and strategies for handling long-term workforce impacts.
As generative AI technology develops at a pace that outstrips many organisations’ ability to adapt, FS-ISAC cautions that the costs of underestimating the associated risks could outweigh the benefits of rapid adoption. For Singapore’s financial sector, which faces an increasingly complex threat environment, AI implementation must be carefully balanced with strong oversight to ensure both resilience and public trust.
The warning is clear: without comprehensive risk management and governance frameworks, Singapore’s companies—especially those in the financial sector—risk exposing themselves to significant operational, legal, and strategic vulnerabilities as they rush to implement AI technologies.