Navigating Vietnam’s new Law on Artificial Intelligence

Vietnam’s first-ever Law on Artificial Intelligence (AI) was passed on 10 December 2025 and will take effect from 1 March 2026, marking a milestone in the country’s digital governance and innovation strategy.

The timeliness and necessity of an AI law

Vietnam has become one of the few countries in the world to enact a dedicated AI law, comprising 35 articles and designed under the principle of “managing for development”. The law aims to strike a balance between risk control and innovation promotion, aligning with global standards while safeguarding national interests.

Dr Sreenivas Tirumala, a senior lecturer in IT and Cyber Security at RMIT University Vietnam, notes that Vietnam’s rapid economic development creates an urgent need to integrate AI into the agriculture, manufacturing, and services sectors. Vietnam’s national strategy on the research, development, and application of AI has also outlined a clear direction to make Vietnam a global AI hub by 2030.

Within this context, the new AI Law will boost investor confidence. “With increasing foreign investments, the initiative of the AI law will give overseas investors more confidence in the data handling and liabilities related to the use of AI in Vietnam. The AI Law also puts Vietnam ahead of other ASEAN countries with regard to legislation,” Dr Tirumala says.

Dr Jeff Nijsse, a senior lecturer in Software Engineering at RMIT Vietnam, emphasises the law’s relevance: “A dedicated AI law is necessary because it specifically addresses urgent, modern threats like deepfakes and manipulation that existing laws do not cover. The AI Law aims to consolidate everything AI under a new umbrella law and solve overlaps with existing legislation like the Law on Digital Technology Industry.”

A risk-based approach: Balancing compliance and growth

The law introduces a three-tier risk classification for AI systems, aligning Vietnam with leading global regulatory models like the European Union’s AI Act. AI systems are categorised based on their potential impact and risk levels, with corresponding legal obligations.

  • High-risk systems are those that could cause significant harm to life, health, legitimate rights and interests, or national security (e.g., medical diagnosis, financial services).
  • Medium-risk systems are those with the potential to confuse, influence, or manipulate users who are unaware that they are interacting with AI systems or AI-generated content (e.g., many chatbots).
  • Low-risk systems are those that do not fall under the above categories.

High-risk systems will be subject to stricter requirements on data, auditing, supervision, and human intervention, while low-risk systems face minimal oversight.

The AI Law categorises AI systems based on their potential impact and risk levels. (Photo: Pexels)

Dr Nijsse adds that the law attempts to be future-proof by assessing outcomes rather than companies, models, or products, cautioning that this future-proofing tends to fall down over time because it is hard to predict how people will use new technology.

“Laws tend to lag innovation, and this is a feature, not a bug. For example, the law says that systems that directly interact with humans such as chatbots are medium risk. However, we just don’t know what benefits this might be limiting in the future. Businesses and entrepreneurs may avoid medium or high-risk categorisations.”

Meanwhile, Dr Tirumala points to the compliance challenge as the law takes effect on 1 March 2026. Legacy systems in healthcare, education, and finance that were put into operation before the law’s effective date have 18 months to meet the compliance obligations.

“That’s quite a short window for small and medium-sized enterprises, and challenging for businesses using high-risk AI systems,” he says. “Increasing AI awareness among businesses and providing a simple compliance process must be considered before enforcing the new law.”

The broader picture: Infrastructure, workforce, and incentives

Beyond regulation, the law sets out ambitious plans for national AI infrastructure with a national database for AI. It also introduces a National AI Development Fund to support startups and small and medium-sized enterprises, while allowing for a controlled sandbox to test sensitive AI solutions.

Notably, the AI Development Fund gives startups access to vouchers that can be used for high-performance computing services, directly cutting R&D costs. According to Dr Nijsse, training new large language models takes significant resources, and there is a particular focus on training foundational models that serve national interests.

“For example, startups training a Vietnamese language model or focusing on Vietnamese data can use vouchers to access the Viettel cloud or VNPT cloud GPUs. This cuts the high costs of model development and keeps the data local,” Dr Nijsse points out.

Dr Sreenivas Tirumala (left) and Dr Jeff Nijsse (Photo: RMIT)

Meanwhile, the sandbox can foster innovation in legal grey areas. “One example is training AI models with medical data to help speed up workflows and check doctors’ work. Present laws require the practitioner making the diagnosis to be human, so it gets ambiguous when a doctor begins consulting an AI model,” Dr Nijsse says. “Self-driving vehicles present another example. Current traffic laws do not account for non-human drivers. Vehicles with partial automation are allowed, but more advanced, fully autonomous vehicles are not deemed safe. Such applications could benefit from an AI sandbox.”

Human resource development is another priority. The law requires integration of AI basics into general education and encourages universities to expand AI-related programs, aiming to build a competitive workforce.

“Universities need to integrate AI fundamentals and ethics into curricula across disciplines, not just tech majors. It’s important to teach students about responsible application of AI based on social and cultural norms along with regulations and laws,” Dr Tirumala says.

“Moreover, building partnerships with industry for hands-on projects and internships will be key to creating an AI-ready workforce. This approach ensures graduates can meet compliance requirements while driving innovation.”

Beyond a legal milestone, the Law on AI signals Vietnam’s clear intent to become an AI leader. By combining rules with incentives for innovation, the country is laying the groundwork for a competitive AI ecosystem. The next challenge is execution: turning policy into practice so that businesses, educators, and innovators can make AI a driver of sustainable growth.

Story: Ngoc Hoang

Masthead image: HM – stock.adobe.com | Thumbnail image: AddMeshCube – stock.adobe.com