RMIT experts flag security and mental health risks from unverified content

Experts in information technology, psychology and communication from RMIT Vietnam share their perspectives on the risks associated with consuming and spreading unverified information online.

Many curiosity-baiting posts can hide malware, expose users to data theft, or result in financial loss through cyber-enabled scams. How would you assess the risks these cases pose to online safety?

Dr Sreenivas Tirumala, Senior Lecturer in IT at RMIT Vietnam 

The digital landscape of Vietnam has become a double-edged sword: while increased digital competency drives the economy, sophisticated cyber scams are on the rise. 

In the third quarter of 2025, online scams continued to surge, with nearly 4,000 phishing domains and 877 fake brand websites detected - an increase of around 300 per cent compared to the same period in 2024. The number of stolen personal accounts climbed to 6.5 million, up 64 per cent from the previous quarter. 

“Curiosity-baiting” posts - sensational headlines, fake news, investment offers, invoices with urgent warnings, and deepfake videos - are the entry points for many cyber scams. 

Users are often lured by “high return” investment offers or urgent warnings. The fear of missing out or a desire to avoid trouble means that they click before checking the link's authenticity. 

Fake news and sensationalised headlines tap into human curiosity and fear. When users click before engaging in critical evaluation, they can be exposed to potential scams. What makes these scams particularly problematic is the ease with which they proliferate through sharing on social media. 

Youth are often targeted through apps offering something for free. Examples include free AI photo editors, “see who viewed your profile” features, and tools for finding free coupons. These apps are used to harvest credentials, which are then sold on dark web marketplaces and can be used to compromise accounts for ransom.  

Users can verify links by using reputable and free tools like ‘Bitdefender Link Checker’. It allows any URL to be verified and checked for malware and scams. Another useful tool is an AI-powered chatbot called Bitdefender Scamio, which can be used to analyse images, emails or text for potential scams or hidden links.

The National Cyber Security Association of Vietnam launched the ‘nTrust’ app in 2024 to fight online scams. ‘nTrust’ can verify links, QR codes and account numbers that are possibly linked to scams, and can scan the apps installed on a smartphone for possible fakes.

It is always better to think before clicking. Look for obvious signs such as the website address, indicators of secure communication (https), the language used, and domain extensions like .vn. 
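The manual checks described above can be sketched as a simple heuristic, shown here in Python. This is a hypothetical illustration only - the function name and warning messages are invented, and real link checkers such as the tools mentioned in this article rely on far richer signals (reputation databases, malware scanning) than these few URL features:

```python
from urllib.parse import urlparse

def link_warning_signs(url):
    """Return a list of basic warning signs found in a URL.

    Hypothetical sketch of the "think before clicking" checks:
    secure communication (https), a sensible host name, and no
    obvious attempt to bury a trusted brand inside a longer domain.
    """
    warnings = []
    parsed = urlparse(url)

    # No https means the connection is not secured.
    if parsed.scheme != "https":
        warnings.append("not using https")

    host = parsed.hostname or ""

    # A raw IP address instead of a registered domain is suspicious.
    if host.replace(".", "").isdigit():
        warnings.append("raw IP address instead of a domain name")

    # Punycode ("xn--") can hide look-alike characters in the domain.
    if "xn--" in host:
        warnings.append("punycode domain (possible look-alike characters)")

    # Scammers often bury a trusted brand deep in the subdomains,
    # e.g. login.bank.com.vn.some-other-site.top
    if host.count(".") >= 3:
        warnings.append("many subdomains (brand name may be a decoy)")

    return warnings

print(link_warning_signs("https://rmit.edu.vn"))                 # no warnings
print(link_warning_signs("http://login.bank.com.vn.example.top"))
```

A clean result from a heuristic like this is not proof of safety; it only catches the most obvious red flags, which is why the dedicated verification tools above remain the recommended check.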

Dr Jeff Nijsse, Senior Lecturer in Software Engineering, RMIT Vietnam 

If you receive a link, it’s best practice to ask the sender what it contains or what type of file it is. Avoid PDF or Microsoft Word documents from unverified sources, as these formats are frequently used to conceal malware that can often go undetected. In a professional setting, you can forward messages or emails to your IT department and ask if they are safe.  

Young people are particularly vulnerable to mental risks when exposed to shocking content. (Image: Pexels)

How can exposure to negative, violent or distressing content affect viewers’ psychological wellbeing, particularly children and young people? 

Ms Vu Bich Phuong, RMIT Psychology lecturer  

Being exposed to negative content on social media, or even worse, to scams leading to financial losses or breaches of privacy or safety, can be a devastating experience for adolescents. They are no longer young enough to be fully protected by parents, yet not mature or independent enough to understand the shocking content they see or to resolve the consequences of any scams they fall victim to. Most often, young people use social media for positive rewards such as entertainment and social connection with peers, so encountering negative content there can cause confusion and distress, adding more layers of anxiety to their existing struggles in offline life.  

Although not all interactions on social media are negative, children, adolescents and young adults are particularly vulnerable to toxic content there. This is because young people’s social media activities are largely neither age-regulated nor monitored, while digital networking sites evolve constantly and users’ media literacy cannot keep up.  

Australia recently enacted a social media ban for children under 16 years of age, and several countries (e.g. Denmark and Malaysia) are looking to follow suit for kids under 15. 

While this may be a well-intentioned effort to protect children from digital harms, young people’s advocacy groups such as UNICEF have pointed out that simply banning children from social media does not make the platforms safer.  

Vietnam needs to consider how to make social media a beneficial tool for all users who are psychologically ready and media-literate, not just for tech giants or predatory scammers. 

Dr Gordon Ingram, RMIT Psychology lecturer  

In the worst cases, viewing violent or disturbing content online can produce a severe effect known as secondhand trauma or secondary traumatic stress. 

This is common among therapists, first responders, and journalists, who bear witness to other people’s traumatic experiences. Young people who randomly click on a link or open an attachment do not have the life experience or training that these other groups have, making them even more vulnerable.  

The effects of secondhand trauma mirror those of post-traumatic stress disorder (PTSD) and include: 

  • intrusive thoughts,  

  • hypervigilant anxiety,  

  • sleep disturbances,  

  • emotional numbness,  

  • and a change in worldview towards seeing the world as a more threatening place or even becoming desensitised to violence.  

The problem is compounded by social media’s repetitive, algorithm-driven feeds, which can negatively reinforce the behaviour of endlessly scrolling through content, looking for the “hit” of a strong emotional reaction, even when this is uncomfortable. 

What does this viral phenomenon reveal about the media literacy skills of Vietnamese users, especially younger audiences? What principles should the public follow to protect themselves when encountering shocking or difficult-to-verify content online? 

Ms Luong Van Lam, Associate Lecturer in Professional Communication, RMIT Vietnam 

Young people’s media literacy is still largely instinctive. They are easily drawn to sensational or mysterious content, then quickly share it with those around them. This behaviour stems from a natural mechanism of the human brain known as negativity bias. Since ancient times, humans have had to pay close attention to danger and warn others in order to survive. Focusing on negative signals has helped us feel safe and protect ourselves. Today, this bias still exists, but in a different form: we tend to pay more attention to negative news than positive news. 

The problem is that many young people stop at exposure and sharing, skipping the crucial middle steps of media literacy: analysis and evaluation, which are needed to form an appropriate response. Reading, feeling shocked and sharing often happen as a reflex. This can also be driven by social motives, such as wanting to be seen as “in the know”, feeling that they are warning the community, or simply wanting enough information to take part in discussions around trending topics. 

In today’s context, especially in the digital environment where information spreads rapidly, instead of viewing, reading and scrolling in a rush, we need to pause and assess information before trusting and sharing it. 

The SIFT method, developed by digital literacy expert Mike Caulfield, can be applied to strengthen the community’s “immunity” against shocking or hard-to-verify content. The framework consists of four moves:

  1. S – Stop: Pause before reading or sharing. Notice your emotional reaction to the headline or information - headlines often provoke anger or excitement to drive clicks. Ask what you already know about the topic, the source and its reputation. 

  2. I – Investigate the source: Look into the credibility, expertise and possible bias of the author and of the source publishing the information. 

  3. F – Find better coverage: Cross-check what other reliable news sources or independent fact-checkers are reporting about the same topic, and whether they present similar or different perspectives. 

  4. T – Trace to the original: Follow claims, quotes, data or media back to their original context to see if they are accurate or taken out of context.  

Story: June Pham
