“When it comes to increasing the AI workforce, it’s the right thing to do: first you set up the governance mechanism, then you start to grow – to triple the pool of AI practitioners, as is the plan, for example,” said Öykü Işık, professor of digital strategy and cybersecurity at IMD Business School in Switzerland.
“The result is a legal framework that is situation-agnostic and can certainly serve as a model for other countries, in the region and beyond,” she said.
Under its AI strategy, Singapore has also pledged to increase government incentives for the sector, including by backing accelerator programmes for AI start-ups and encouraging companies to set up AI “centres of excellence”.
Having a deep pool of AI expertise working within a carefully constructed governance framework will be essential in combating the rise of bad actors who will utilise AI technologies indiscriminately for power and profit, analysts say.
Already, the launch of AI-based tools has multiplied the number of cyberattacks in Asia, according to KPS Sandhu, head of global strategic initiatives at the Tata Consultancy Services cybersecurity practice.
“Asia has been in the crosshairs of attackers primarily because I think they’re rapidly evolving digital economies and technology is coming up and proliferating quite fast as a result,” he said.
Phishing kits – one of the main tools for cyberattackers – are now being offered for as little as US$10 on the dark web, Sandhu added, referring to a subset of the internet that is not visible to search engines and requires the use of an anonymising browser.
Even Singapore is not immune. Despite its status as one of Asia’s premier tech hubs, nearly half of the city state’s small businesses believe they are vulnerable to cyberattacks, according to an annual Asia-Pacific Small Business Survey.
In addition to better governance, organisations needed to equip themselves with better technologies, Sandhu said. “You need to fight fire with fire.”
International cooperation to fight cybercrime and share lessons learned also needed to be strengthened, though such efforts had already improved, he added.
Fighting phishing
The need for better ways to combat such crimes has become increasingly apparent due to the growing number of cyber-scamming syndicates operating across Asia.
On December 8, Britain announced sanctions on actors in Cambodia, Myanmar and Laos for coordinating a multi-country scam operation. The nine people and five entities were sanctioned for “links to forced labour schemes” and trafficking people into “online scam farms”.
Such operations have taken root in recent years in Cambodia, Myanmar and Laos, where they flourish amid endemic corruption and weak law enforcement, spreading their tentacles to other countries in the region.
While Singapore’s AI strategy did not contain strict guidelines relating to the use of AI in phishing attacks, the government was likely “aware of the risks of bad actors using AI such as AI-powered phishing attacks”, said Nicholas Lauw, partner at RPC Premier Law.
“The fact that NAIS specifically states that AI should be used responsibly means that it should not be seen as a blank cheque that bad actors can … use AI for criminal means,” he said.
Some countries in the Asia-Pacific region could use Singapore’s governance framework as a benchmark, Lauw added.
Other industry executives said Singapore should provide assistance to other countries as cyber criminals become increasingly sophisticated.
“We were able to detect a phishing email before just by the poor construction of the email, but these days, it’s so perfectly crafted,” said Vince Chew, chief operating officer and chief information security officer of Evvo Labs.
Zeroing in on the main cause, human error, is half the battle, experts say.
“Singapore’s government should encourage enterprises and institutions to share intelligence. The collaboration between financial institutions and telcos, as proposed in the Shared Responsibility Framework, can facilitate the sharing of AI and machine learning technologies to identify and block sophisticated phishing attempts,” Chew said.
That would help facilitate the creation of regional standards for AI’s ethical use, transparency, and effectiveness in detecting and mitigating phishing attacks, he added.
Teamwork could work well in the region, with each country bringing its “own unique tech and cyber issues”, he said.