Dr Nijsse adds that the law attempts to be future-proof by assessing outcomes rather than companies, models, or products, though he cautions that such future-proofing tends to break down over time because it is hard to predict how people will use new technology.
“Laws tend to lag innovation, and this is a feature, not a bug. For example, the law says that systems that directly interact with humans, such as chatbots, are medium risk. However, we just don’t know what benefits this might be limiting in the future. Businesses and entrepreneurs may avoid medium or high-risk categorisations.”
Meanwhile, Dr Tirumala points to the compliance challenge ahead as the law takes effect on 1 March 2026. Legacy systems in healthcare, education and finance that were put into operation before the law’s effective date have 18 months to meet their compliance obligations.
“That’s quite a short window for small and medium-sized enterprises, and challenging for businesses using high-risk AI systems,” he says. “Increasing AI awareness among businesses and providing a simple compliance process must be considered before enforcing the new law.”
The broader picture: Infrastructure, workforce, and incentives
Beyond regulation, the law sets out ambitious plans for national AI infrastructure, including a national AI database. It also introduces a National AI Development Fund to support startups and small and medium-sized enterprises, and allows for a controlled sandbox in which sensitive AI solutions can be tested.
Notably, the AI Development Fund gives startups access to vouchers that can be redeemed for high-performance computing services, directly cutting R&D costs. According to Dr Nijsse, training new large language models requires substantial resources, and the fund places particular emphasis on training foundational models that serve national interests.
“For example, startups training a Vietnamese language model or focusing on Vietnamese data can use vouchers to access GPUs on the Viettel cloud or VNPT cloud. This cuts the high costs of model development and keeps the data local,” Dr Nijsse points out.