It took Hayao Miyazaki decades to build the hand-drawn world of Studio Ghibli - its softness, symbolism, and soul. In 2025, a machine did it in seconds. With a single prompt, reportedly used by more than one million people within the first hour, it was suddenly possible to generate a “Ghibli-style” world. The filter quickly went viral around the world, gaining particular traction in countries such as South Korea and India and flooding social media with dreamlike, AI-rendered versions of everyday life. The aesthetic? Big eyes, soft lighting, whimsical architecture - the unmistakable influence of Japanese animation.
But behind the dreamy visuals lies a deeper question: when visual culture becomes automated and aestheticised by machines, what is lost and who gets to shape the aesthetics we live with? Ricardo Arce and Christian Berg, lecturers from the Digital Media program in the School of Communication & Design at RMIT Vietnam, argue that this shift raises urgent questions about authorship, consent, and cultural agency.
According to Arce, Program Manager of RMIT Vietnam’s Bachelor of Design (Digital Media), the rise of AI-generated imagery marks a turning point for visual storytelling. With over 30 years of experience in the animation industry, he recognises AI as a powerful tool for ideation, collaboration, and rapid production. But he draws a clear line when it comes to uncritical mimicry.
“When everyone starts generating images in the same style, it flattens visual expression,” Arce said.
He describes the current wave of AI-generated content as a form of inanity - a saturation of style that erodes symbolic value.
Nowhere is this more visible than in the widespread use of the Ghibli filter. Despite its visual charm, the filter draws heavily from a deeply personal and culturally rooted aesthetic pioneered by Hayao Miyazaki - who famously rejected AI art as “an insult to life itself”.
But the legal and ethical questions go further. If an AI model is trained on thousands of frames from Miyazaki’s films, does the resulting aesthetic belong to the algorithm or to its source? And where does consent fit into this process? In the age of data scraping, the absence of attribution is not a glitch. It’s a feature.
The problem isn’t just artistic. Berg, Associate Program Manager in Digital Media, whose PhD research focuses on AI and photography, points to the troubling ways in which AI-generated imagery is being co-opted for political messaging and social manipulation.
“We’ve seen how AI-generated memes are now being used by ultra-right-wing groups - and even, at times, by public institutions - to mock immigrants, people with disabilities, and LGBTQ+ communities,” he said.
These visuals may seem humorous or harmless on the surface, but their scale and viral reach give them the power to dehumanise and distort. Repetition, speed, and emotional impact make them an effective form of soft digital propaganda.
This is how so-called AI slop - content that is mass-generated, low-context, and stylistically overfamiliar - becomes a vehicle for visual manipulation: easy to produce, emotionally charged, and often stripped of nuance or accountability.
At the same time, the aesthetics themselves are converging. When AI platforms consistently surface dreamy, nostalgic, or hyper-stylised imagery, they don't just shape what machines produce - they begin to condition human taste. Aesthetic sameness becomes the default, quietly eroding diversity in perception, memory, and imagination.
Yet both educators agree: AI is not inherently the enemy. Arce sees its potential to support ideation and to speed up and refine production workflows in animation and design. In fact, Berg uses it in his own research - but very much on his own terms. Rather than relying on models trained on billions of images scraped from the internet, he feeds tools like Stable Diffusion and DreamBooth with his own photographic archive.
This practice is a way for artists to retain authorship in an age when style is often outsourced to algorithms.
“Instead of letting AI decide how your work should look, feed it your own perspective. Use it as a way to reflect, not a blueprint,” Berg said.
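To make the idea concrete, here is a minimal sketch, in Python with the open-source Hugging Face diffusers library, of the kind of workflow Berg describes: a Stable Diffusion checkpoint fine-tuned with DreamBooth on a personal photographic archive is loaded locally and prompted with the identifier attached to that archive. The file path, prompt, and settings below are illustrative assumptions, not a description of his actual setup.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local path to a Stable Diffusion checkpoint fine-tuned with
# DreamBooth on the artist's own photographs.
MODEL_DIR = "./my-dreambooth-checkpoint"

# Load the fine-tuned weights instead of a generic, internet-scraped model.
pipe = StableDiffusionPipeline.from_pretrained(MODEL_DIR, torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # GPU recommended; CPU works but is far slower

# "sks" follows the rare-token convention used in many DreamBooth examples;
# it stands in for whatever identifier was bound to the archive during fine-tuning.
prompt = "a quiet street at dusk, in the style of sks photography"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_from_personal_archive.png")
```

The technical details matter less than the provenance: in a setup like this, every generated image is steered by photographs the artist actually took, rather than by an anonymous scrape of other people’s work.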
Meanwhile, Arce warns that overreliance on AI risks diluting the symbolic depth and cultural specificity of visual storytelling.
“The creative process is not only about speed or style. It’s about meaning, and meaning takes time,” he said.
For Vietnamese creators, this presents a valuable opportunity. Instead of defaulting to global visual trends, they can shape the tools to reflect the richness of their own heritage - from traditional textures and local symbolism to narrative structures rooted in place. By doing so, AI becomes a means of cultural extension rather than imitation.
In the classroom, both Arce and Berg emphasise that higher education must go beyond teaching technical proficiency. It must encourage students to gain agency and ask better, deeper questions that challenge not just how tools are used, but why. What motivates the choice to generate an image with AI? Where did the data behind the output come from? Is the result truly an expression of the creator’s voice, or simply the algorithm echoing what it has learned? And perhaps most crucially: what do we lose when we stop using our hands, our memories and emotions, and the in-between processes of the act of creation?
These are not abstract concerns. In many cases, users upload personal photos to AI-powered apps without reading the fine print. Faces, locations, and aesthetic preferences are quietly transformed into training data. In a world driven by free prompts and viral filters, users are not only consumers - they become the raw material.
Today’s visual culture is already a hybrid space, shaped by constant collaboration between people and machines. From cameras and editing software to motion capture and VFX, technology has always been part of the creative process. AI is simply the next chapter. But it is a chapter that demands more intentionality.
The next generation of artists must both master the machine and know when to put it down. For emerging Vietnamese creatives, the challenge is not whether to use AI, but how to use it without losing the integrity of their own vision.
Because when everyone’s work begins to look the same, perhaps the boldest act of creativity is to refuse the template. AI has become a central part of the creative ecosystem, just as cameras, software, and the internet did before it. But today, the stakes are no longer just technical - they are cultural.
“There’s a difference between prompting and creating,” Berg reminds us. In other words, it’s not just about giving instructions to a machine. It’s about shaping a message that means something.
To protect creative diversity, we must resist the seductive ease of machine-generated sameness. That doesn’t mean rejecting AI altogether - it means using it critically, intentionally, and in ways that honour our unique perspectives.
Because the future of visual culture doesn’t belong to the machine. It belongs to those who shape the machine to see the world in a creative way.
Story: Quan Dinh H.
Masthead image: Sikov - stock.adobe.com
Thumbnail image: ipopba - stock.adobe.com