Artificial Intelligence: Prospects and Pitfalls

Fitriani, PhD
Special Project Researcher on Digitalisation, Centre for Strategic and International Studies, Indonesia
1 Feb 2024
Labour and Future of Work

Artificial intelligence (AI) is defined as the ability of smart computers and programs to perform tasks that would normally need human intellect. AI has enormous potential in various areas, including energy, agriculture, health, retail, and logistics. The advantages of using AI may include higher productivity, better decision-making, better client experiences through optimising energy use, assisting with precision information, and simplifying supply chains.

Several Southeast Asian countries have invested in developing and using AI, albeit to varying degrees. For example, Singapore invests heavily in the development of AI for its citizens, especially for improving clinical diagnoses, detecting scams, and managing government affairs. The country is also positioning itself as the region’s AI hub. Other countries in the region apply AI differently: Viet Nam deploys the technology to increase tax compliance; Thailand utilises AI to find traffic-jam solutions; and Indonesia applies it in health and agriculture. A 2020 study by Kearney and EDBI titled “Racing toward the Future: Artificial Intelligence in Southeast Asia” showed that 80 per cent of the region has begun adopting AI in private companies and research institutions. The 2023 publication by Albert Rapha noted that AI has the potential to increase the ASEAN economy by 10 to 18 per cent, or close to 1 trillion US dollars, by 2030.

AI technologies—which cover big data processing, machine learning, robotic automation, speech recognition, and chatbots—can provide users with economic benefits and ease people’s lives, but they also pose potential risks. These risks include job displacement; privacy issues; inadequate cybersecurity measures that may leave AI systems vulnerable to attacks; and ethical concerns regarding bias in decision-making algorithms due to limitations in data pools or coding. Additionally, AI is open to misuse, such as when malicious actors use the technology for the mass creation and dissemination of disinformation, leading to social unrest.

Specific concerns raised include the use of doctored images, videos, and other AI-generated fake content targeting information surrounding general elections, which has the potential to influence the results. The use of AI to edit photos and videos into deepfakes is especially concerning because deepfakes look vivid and deceptively real and are difficult to verify, thus potentially damaging people’s credibility and reputation.

The rapid digitalisation in Southeast Asia following the COVID-19 pandemic, which has led to increased use of social media in the region, has raised concerns about the spread of false news. With an estimated 68 per cent of the overall population using social media and young people aged 16 to 24 spending more than 10 hours each day on the internet, the region risks becoming a breeding ground for disinformation and misinformation. At the same time, social media has become an increasingly essential tool in political campaigns: politicians and political parties in the region share their messages and mobilise support using platforms like Facebook, Twitter, and TikTok.

Businesses also use the platforms to sell products and services. Nevertheless, social media has made it easier to distribute disinformation, and AI simplifies the manufacture and spread of false news and propaganda, which may be used to sway public opinion and destabilise societies. Additionally, AI algorithms enable the creation of bots and support buzzers—paid account operators with thousands or even millions of followers who spread a given message—a particular worry for countries in the region during elections.

The risks presented by AI development, which include disinformation and misinformation, cybersecurity attacks, and societal disturbances, call for a multidimensional approach. While it is impossible to foresee which actions will be beneficial, there are strategies that governments can adopt to mitigate risks. This article offers seven policy recommendations: (i) increasing digital literacy, specifically in AI and related concerns; (ii) fostering multistakeholder collaboration; (iii) creating ethical frameworks and norms; (iv) ensuring adaptive regulations; (v) implementing regulatory oversight; (vi) investing in AI research and development; and (vii) pursuing international collaboration.

First, enhancing digital literacy and awareness, especially concerning AI and its related issues, can be directed towards the general public, politicians, and specific groups with significant outreach and societal impact. This allows societies to recognise and reduce the risks of AI development and implementation. For example, the 2022 ASEAN Training-of-Trainers Programme led by the Senior Officials Meeting on Education emphasised education as key to media literacy and countering disinformation. The programme connects with the ASEAN Work Plan of Action to Prevent and Counter the Rise of Radicalisation and Violent Extremism (Bali Work Plan); the Framework and Joint Declaration to Minimize the Harmful Effects of Fake News; and the ASEAN Declaration on Culture of Prevention for a Peaceful, Inclusive, Resilient, Healthy and Harmonious Society. The programme also produced an educator’s toolkit that has been disseminated widely.

Second, multistakeholder and multisectoral collaboration, involving governments, research organisations, businesses, and experts in the AI sector, paves the way for stakeholders to discuss each other’s activities and can be useful both for tackling AI issues collectively and for charting a blueprint for sectoral action. One potential outcome of this collaboration is a joint “verification and certification” process for AI, involving multiple actors, to ensure the safety and security of AI systems.

Third, ethical frameworks and norms to govern AI development and deployment need to be established to regulate and ensure proper usage, provide limits, and penalise malicious actors. Values such as safety, justice, openness, and accountability can be adopted within these frameworks.

To ensure that the advantages of AI exceed the risks, Southeast Asian countries are developing the ASEAN Guide on AI Governance and Ethics, which aims to provide safety measures for the burgeoning technology. The drafting process began in February 2023 under Singapore’s chairmanship, and the guide is slated for launch at the ASEAN Digital Ministers’ Meeting the following year.

Fourth, ensuring the implementation of adaptive regulation is important given the fast pace of AI and technology development. Government rules and standards must be adaptable and capable of evolving as technology progresses to capture the economic benefits that AI brings. Moreover, regulations must be reviewed and updated periodically to guarantee their effectiveness. Understandably, balancing innovation and regulation is a difficult task, as excessive and rigid regulation may impede innovation, while too little regulation might result in uncontrolled hazards. To strike a balance, governments, the corporate sector, academia, and civil society must work together on a continual basis. In 2020, ASEAN issued Managing Technology’s Implications for Work, Workers and Employment Relationships, which includes a rudimentary examination of the application of robotics and automation in Southeast Asia’s industries. The report notes that the use of AI and cloud computing may compel business outsourcing and disturb the labour market, and it therefore proposes a “robot tax” to fund the expansion of social welfare programmes. Although the report does not discuss regulating the technology, it is a first step for the region in raising concerns about how AI influences labour conditions.

Fifth, the governments’ capacity to conduct regulatory oversight hinges on the recognition that policies and regulations are likely to be shaped by national values and political, economic, and cultural considerations. Governments must aspire to have the capacity to assess the conduct of AI systems and to enact and enforce regulations governing AI research, development, and application. This covers rules governing AI safety, data privacy, and cybersecurity.

Sixth, governments need to invest in AI research and development to foster the development of safe and useful AI technology. This involves financing for AI ethics, security, and risk assessment research. Governments must commission independent audits and studies of AI systems, specifically focusing on their potential hazards. These audits may assist in identifying and addressing problems.

Last but not least, the seventh recommendation stresses the importance of governments actively pursuing international cooperation, given the inherently transboundary nature of technology and AI. Mitigating the risks associated with AI requires collaborative international efforts, trust, and capacity. To maintain global security and ensure the safe and responsible use of AI, governments are advised to cooperate in developing common standards, exchange best practices, and coordinate capacity- and trust-building activities. If deemed necessary, governments may consider creating international treaties or accords.

Within the ASEAN Economic Community, standards and conformance have been discussed to facilitate trade, improve transparency, harmonise regulatory regimes, and undertake technical cooperation. A similar approach can be taken with AI technology. For example, governments can explore the possibility of establishing agreements similar to the 2005 Agreement on the ASEAN Harmonized Electrical and Electronic Equipment (EEE) Regulatory Regime, which Member States have since adopted and updated.

The use and development of AI are indeed inevitable, holding the potential to bring about significant benefits to society. Ensuring that AI works for and not against humanity requires several key considerations. One of these is upholding the principles of transparency and fairness since AI tools are not universally accessible. Technological and economic disparities often allow only the privileged to benefit from AI. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has raised profound ethical concerns, emphasising that biases embedded in AI systems may contribute to climate degradation, endanger human rights, and exacerbate existing disparities, causing greater damage to already marginalised communities. In response to these challenges, UNESCO formulated the Recommendation on the Ethics of Artificial Intelligence in 2021, underscoring the fundamental principle of protecting human rights and dignity in the oversight of AI systems. The recommendation has been adopted by all 193 UN member states.

The ASEAN Member States have been taking various initiatives to embrace AI and technology through the issuance of national strategies. Nevertheless, the level of engagement and investment in AI varies from one Member State to another, as countries have different priorities and approaches toward the technology. To cultivate AI’s potential in the region, it is crucial for Member States to work collectively and collaboratively. Sharing best practices, promoting cross-border cooperation, and developing a regional approach to AI policy and regulation can help ensure that AI technology benefits the entire region and contributes to building capacity and trust in AI development.

The views and opinions expressed belong solely to the author and do not reflect the official policy or position of ASEAN.
