This is under way. Several governments that had signed up to high-level voluntary principles, such as the Organisation for Economic Co-operation and Development's AI Principles, are moving to formulate and enact regulation.
Much of the summit was focused on long-term risks, including the speculative existential threat AI could pose to humanity. But the near-term risks – ranging from compromising privacy and infringing intellectual property rights, to spreading disinformation and perpetuating societal bias – are more relevant concerns for the vast majority of businesses pursuing AI investments.
These companies could benefit from regulation and policy frameworks surrounding AI, which not only protect consumers but are critical to establishing sufficient trust in AI to sustain its spread and realise its potential.
The biggest technology companies recognise this. While they have traditionally been concerned about regulation holding back innovation, they understand that assuaging worries over AI is critical to its growth. The private sector may yet disagree with the specifics of regulations enacted by policymakers, but Alphabet, Microsoft and ChatGPT developer OpenAI all support AI regulation in some form.
Not all governments are seeking to fast-track the development and enactment of AI laws, however. Warning against stifling innovation, British Prime Minister Rishi Sunak asked: “How can we write laws that make sense for something that we don’t yet fully understand?”
Still, Southeast Asia’s CEOs are well aware of the downsides of AI, acknowledging that more work is needed to address risks ranging from cyberattacks to disinformation and deepfakes. Two-thirds of those surveyed say the business community needs to focus on the ethical implications of AI, with the same proportion saying businesses are not doing enough to manage the unintended consequences.
Despite the nascent state of regulation, corporations are rightly prioritising AI investments: after all, the successful businesses will be those that move early to incorporate data techniques and AI into everything they do.
As highlighted by another recent EY survey, businesses investing in AI realise that managing issues related to accuracy, ethics and privacy will require significant changes to their governance. But few have taken steps in that direction: only about a third of organisations globally even have an enterprise-wide governance strategy for AI.
Because AI is in its infancy, many companies lack the in-house capabilities to develop these governance processes, or the confidence that their initiatives will comply with fast-evolving and complex regulatory requirements.
That’s a gap they need to close. Credible and effective AI governance will become an increasingly important driver of growth and competitive advantage, especially given the shift in public attitudes, with concern overshadowing excitement. Regulation is essential: it can enhance trust in AI and support its adoption. But companies have their own part to play in building that trust. It starts with something as time-honoured as AI is innovative: good governance.
Patrick Winter, an accounting professional with more than 30 years’ experience, is EY Asia-Pacific area managing partner. The views in this article are solely those of the author.