Researchers at the Tokyo University of Science have developed another technique that skips retraining altogether. Instead of modifying the model's internal weights, their method adjusts how the model responds to prompts, allowing it to selectively "unlearn" unnecessary or sensitive facts while keeping the rest of its knowledge intact.
Other privacy-preserving approaches complement these unlearning efforts: federated learning keeps data on users' devices instead of storing it on a central server, and differential privacy adds random noise to data or query results, protecting individuals while still allowing the AI to learn useful patterns.
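The idea behind differential privacy can be illustrated with a minimal sketch. The classic Laplace mechanism adds random noise, scaled to a query's sensitivity and a privacy budget epsilon, so that any one person's presence in the data barely changes the published result. The function names and the counting-query example below are illustrative only, not from any specific library or from the research described above.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    if u == -0.5:
        u = -0.5 + 1e-12  # guard against log(0) in the rare edge case
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a differentially private count.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means more noise and stronger privacy.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Example: publish how many users bought something today, with privacy.
noisy_total = dp_count(true_count=1042, epsilon=0.5)
```

The key trade-off is visible in the `epsilon` parameter: a strict budget (say 0.1) hides individuals well but blurs the statistic, while a loose budget (say 10) gives an accurate number with weaker privacy.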
The goal across all these efforts is the same: to give users more control over what AI remembers and to bring real data privacy closer to reality.
Why this matters for Vietnam
In Vietnam today, people make online transactions daily, from ordering a meal to paying utility bills, with AI quietly working in the background. It collects and processes our names, addresses, payment card details, and even our medical records. When that information is misused or accidentally leaked, the consequences can be devastating: lost money, damaged reputations, and deep emotional harm.
Vietnam’s new Law on Personal Data Protection (effective from 1 January 2026) is a step in the right direction, but laws alone can’t protect us from every risk. If AI can’t truly “forget” what it has learned, we risk building systems that carry fragments of our private lives forever.
Trust isn’t built on promises but on proof. If Vietnam wants a digital future people can believe in, it must ensure AI can forget as well as it learns. It’s how we protect privacy and build systems that serve people, not just data.
Story: Dr James Kang, Senior Lecturer in Computer Science, School of Science, Engineering & Technology, RMIT University Vietnam
Thumbnail image: Elnur – stock.adobe.com | Masthead image: tippapatt – stock.adobe.com