Malaysia's First AI Office Established to Regulate Policy and Guide Innovation
|
NEWS
|
In December 2024, Malaysia’s Ministry of Digital announced the formal inauguration of the National Artificial Intelligence Office (NAIO). This move, aimed at positioning Malaysia as a regional leader in Artificial Intelligence (AI) technology and applications, is more than a regulatory step. The office seeks to drive AI investments, promote AI innovation domestically, foster internal and international collaboration, formulate robust policies, and support further governance and security developments. It's a proactive approach that could significantly benefit the region's AI industry.
In its statement delivered at the Malaysia International Trade and Exhibition (MITEC) in Kuala Lumpur, the NAIO outlined seven key deliverables from the office. These deliverables are listed as such: 1) AI Technology Action Plan 2026-2030; 2) AI Adoption Regulatory Framework; 3) Acceleration of AI Technology Adaptation; 4) AI Code of Ethics; 5) AI Impact Study for Government; 6) National AI Trend Report; and lastly 7) Datasets Related to AI Technology.
Increasing Momentum to Fill Regional AI Regulation Gaps
|
IMPACT
|
In 2024, Malaysia saw significant AI investments from major players like Google and Microsoft. Considering this, establishing the NAIO is a vital next step for the Malaysian government to foster an environment conducive to AI innovation and development within and across the region. As Southeast Asia (SEA) prepares for substantial investments in data centers and other AI-related infrastructure, the timing of the NAIO's launch is ideal. It aims to seize a leadership role in these developments and to formulate crucial policies that will regulate AI growth in the region. The NAIO's entry into the regulatory landscape of SEA will further support the ongoing trend of promoting responsible and ethical AI development, adoption, and usage.
This aligns with ABI Research's observation that the current AI regulatory space in SEA generally lacks robust policies and regulations to drive AI development and innovation. Regionally, the Association of Southeast Asian Nations (ASEAN) published a guiding principle of AI governance and ethics for the region, acting as a loose set of standards that remain guidelines, rather than regulations for ASEAN members to follow. On the national level, while Singapore and Malaysia have published their national AI development plans for the foreseeable future, other countries in the region have yet to publish a comprehensive plan for regulating and aiding the development of AI in their countries. Malaysia’s NAIO could be the catalyst that spurs the region’s governments to seriously consider developing national-based agencies that could aid development, spur innovation, and regulate the growing AI market in SEA.
AI Developers in SEA to Embrace Self-Regulation in Anticipation of Tighter AI-Based Policies
|
RECOMMENDATIONS
|
ABI Research believes that with the establishment of the NAIO, countries in the SEA region, such as Indonesia, Vietnam, Thailand, and others, will soon follow suit in launching nationally oriented AI offices in 2025. This will align with the region's interest of becoming an AI hub, with ASEAN and its member states developing a strong regional ecosystem to drive AI growth, development, and innovation.
AI software developers in Malaysia and the SEA region need to be mindful of potential increased AI regulations and ensure that AI applications are developed ethically and responsibly. Adopting standards aligned with the ASEAN Guide on AI Governance and Ethics recommendations should be foundational to development policies. However, beyond that, other key considerations and recommendations that AI innovators in SEA need to consider include:
- Intellectual Property (IP) and Privacy
- Challenges: A common pitfall of developing AI and its essential datasets is the exposure of proprietary and confidential data. AI’s web scraping capabilities and training dataset could unintentionally infringe on IP like copyrights, patents, and trademarks, possibly leading to legal issues with IP holders over the model’s training data or output similarities.
- Recommendations: AI developers should conduct risk-based assessments before starting any data collection, processing, or modeling. Additionally, in mitigating risk, they should ensure that they comply with the law in data acquisition, whether it involves permissions, licensing, or compensating individuals who own the IP.
- Malicious Use
- Challenges: AI’s rapid growth has spawned various malicious use cases, from AI-powered cyberattacks to data manipulation and poisoning to deepfake manipulations. Generative Artificial Intelligence (Gen AI) software without the proper safety rail guards coded in place will risk its product being manipulated and exploited for illegal and unethical activities, such as using deepfakes to socially engineer attacks.
- Recommendations: Enterprises developing Gen AI software or AI agents should establish and form a comprehensive roadmap for their product and a trust and safety team that consistently evaluates its usability and ability to reject undesirable prompts. A consistent effort in red-teaming and re-evaluation will inevitably raise the product’s efficiency and credibility, future-proofing its code and design from incoming governmental regulations.