Generative AI and the rise of e-commerce deviant behaviour

As the year-end shopping season approaches, e-commerce platforms brace for record-breaking sales and a rise in unethical consumer behaviours. While fraudulent returns and fake reviews have long plagued online retailers, a new twist is emerging: AI-powered return fraud.

Recent reports from China’s Double 11 (11 November) shopping festival reveal a troubling trend. Consumers are using AI-generated images to fake product defects – making fresh fruit appear rotten, pristine dresses look torn, or ceramic mugs seem cracked – to claim refunds without returning goods. These scams exploit refund policies and lean on the platforms’ tendency to favour buyers in disputes.

This phenomenon underscores a broader challenge: generative AI is lowering the barrier and increasing the velocity of deviant behaviour in e-commerce. What once required elaborate photo editing skills can now be done in seconds with free tools. And as AI-generated content becomes more realistic, distinguishing truth from forgery grows harder.

Why generative AI complicates e-commerce integrity

Malicious actors invariably find novel ways to apply new technologies to criminal and unethical ends. Generative AI can be applied to numerous types of criminal and deviant behaviour within the e-commerce environment, from fake advertisements to false returns.

Generative AI and deepfake technologies can help offenders create fake reviews, images, product information, or advertisements in seconds or minutes. Many of these advanced models from OpenAI, Google, and Midjourney are free or cheap to subscribe to, creating a low barrier to criminal and deviant behaviour. More sophisticated cyber criminals even develop their own models.

Generative AI is lowering the barrier and increasing the velocity of deviant behaviour in e-commerce. (Photo: Pexels)

These capabilities impact both consumers and merchants. A cyber criminal can set up a fake shop with fake advertisements on an e-commerce marketplace or social media marketplace, complete with realistic-looking videos and photos of products, then delete the account after exploiting a number of victims. According to a recent Reuters article, Meta earns an estimated US$7 billion in revenue from fraudulent advertisements on its platforms, yet appears to do little to stem this growing problem in the age of generative AI.

Offender motivations and rationalisations come in many forms. Some offenders are major cyber criminals; others are ordinary customers who have experienced financial difficulty. Each often rationalises their actions using neutralisation techniques such as “everyone does it”, “the system is faceless”, and “this doesn’t hurt anybody”. This moral disengagement, amplified by online anonymity, creates fertile ground for unethical practices.

What platforms can do to protect themselves and merchants

Some Chinese platforms have started requiring video evidence for refund claims and rating buyer credibility based on past behaviour. Others deploy AI detectors to flag synthetic images, though accuracy remains a challenge (one classic forensic heuristic is sketched below). Beyond technical fixes, platforms should rethink policy design:

  • Limit “refund-only” options, which allow dissatisfied customers to receive a full or partial refund without returning the product.
  • Introduce nudges warning against fraudulent returns.
  • Offer store credit instead of cash refunds for suspicious cases.
  • Train staff to spot AI artefacts and escalate disputes.

Dr Joshua Dwight, Associate Program Manager of the IT and Software Engineering programs, RMIT University Vietnam (Photo: RMIT)
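
Platforms’ production detectors are proprietary, so as a rough illustration only, the sketch below applies error level analysis (ELA) – a classic image-forensics heuristic in which digitally altered regions often recompress differently from the rest of a JPEG – using the Pillow library. The file name and threshold are hypothetical, and ELA is far weaker than trained detectors against modern AI-generated imagery.

```python
# Minimal error level analysis (ELA) sketch: recompress a JPEG at a known
# quality and measure how much it differs from the original. Altered
# regions often show larger recompression error. Illustrative only.
import io

from PIL import Image, ImageChops  # pip install Pillow


def ela_score(path: str, quality: int = 90) -> float:
    """Return a rough 0-255 score; higher means more recompression error."""
    original = Image.open(path).convert("RGB")

    # Recompress the image at a known JPEG quality and reload it.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)

    # Pixel-wise difference between the original and its recompression.
    diff = ImageChops.difference(original, recompressed)

    # Use the largest per-channel error as a crude tamper signal.
    extrema = diff.getextrema()  # [(min, max), ...] per colour channel
    return float(max(channel_max for _, channel_max in extrema))


if __name__ == "__main__":
    score = ela_score("return_claim_photo.jpg")  # hypothetical file name
    # Threshold is arbitrary here; a platform would tune it on labelled data.
    if score > 40:
        print(f"Flag for manual review (ELA score {score:.0f})")
    else:
        print(f"No obvious recompression anomaly (ELA score {score:.0f})")
```

In practice a platform would combine several such signals – metadata checks, trained classifiers, and buyer history – rather than relying on any single heuristic.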

My research suggests that e-commerce offender script analysis – mapping offender actions across preparation, activity, and post-activity phases – can help platforms anticipate and disrupt deviant behaviours before they escalate.

First, organisations should map out the normal customer journey end to end, then identify the interaction and input points of their e-commerce environment. Common interaction areas include account creation, payment methods, shipping and delivery, returns, and reviews. They should then create scenarios to develop profiles across a spectrum of behaviours, from normal consumer behaviour to criminal hacking.
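
To make this mapping step concrete, here is a minimal Python sketch. The touchpoints come from the list above; the scenario descriptions are illustrative assumptions, not an exhaustive taxonomy.

```python
# Sketch of mapping e-commerce touchpoints to a spectrum of behaviours,
# from ordinary use to abuse. Touchpoints follow the article; scenarios
# are illustrative assumptions.
from enum import Enum


class Touchpoint(Enum):
    ACCOUNT_CREATION = "account creation"
    PAYMENT = "payment methods"
    SHIPPING = "shipping and delivery"
    RETURNS = "returns"
    REVIEWS = "reviews"


SCENARIOS: dict[Touchpoint, list[str]] = {
    Touchpoint.ACCOUNT_CREATION: [
        "genuine sign-up",
        "throwaway accounts for one-off scams",
        "bot-driven mass registration",
    ],
    Touchpoint.RETURNS: [
        "legitimate return of a faulty item",
        "opportunistic 'wardrobing' (use, then return)",
        "AI-generated damage photos to claim a refund",
    ],
    Touchpoint.REVIEWS: [
        "honest review",
        "incentivised review",
        "AI-generated fake review campaign",
    ],
}

for touchpoint, behaviours in SCENARIOS.items():
    print(f"{touchpoint.value}: {' -> '.join(behaviours)}")
```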

Example: “Everybody Does Return Abuse on the Holidays” profile

Organisations can profile a consumer using AI-generated images to abuse an e-commerce return policy as follows (a sketch encoding the script as data follows the list):

  1. Preparation: The consumer identifies e-commerce companies with policies that allow returns/refunds for damaged goods. They identify, select, and purchase a product. Crucially, they acquire and learn to use AI tools, such as Pixlr or Midjourney, that can alter photos of the product to show damage.
  2. Pre-activity: The consumer receives the undamaged product. They then use the AI tools to digitally alter images of the product, generating false visual evidence of damage to support a fraudulent claim.
  3. Activity: The consumer submits a return/refund request, violating company policy by using the fabricated damaged-product photos as evidence, often claiming the item was defective or damaged upon arrival.
  4. Post-activity: The consumer keeps the financial gain while retaining the undamaged product.
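
One way to operationalise such a profile is to encode it as structured data, so the same phases can drive checklists, training material, or monitoring rules. The sketch below is a hypothetical encoding of the script above, not an implementation from the underlying research.

```python
# Encode the four-phase offender script as data so it can be reused by
# other tooling. Phase names follow the profile above; a hypothetical
# sketch, not the research instrument itself.
from dataclasses import dataclass, field


@dataclass
class OffenderScript:
    name: str
    preparation: list[str] = field(default_factory=list)
    pre_activity: list[str] = field(default_factory=list)
    activity: list[str] = field(default_factory=list)
    post_activity: list[str] = field(default_factory=list)


RETURN_ABUSE = OffenderScript(
    name="Everybody Does Return Abuse on the Holidays",
    preparation=[
        "identify platforms with lenient damaged-goods refund policies",
        "select and purchase a product",
        "acquire and learn AI image-editing tools",
    ],
    pre_activity=[
        "receive the undamaged product",
        "generate falsified images showing damage",
    ],
    activity=[
        "submit a refund request with the fabricated photos as evidence",
    ],
    post_activity=[
        "keep the product and the refund",
    ],
)

for phase in ("preparation", "pre_activity", "activity", "post_activity"):
    print(f"{phase}: {getattr(RETURN_ABUSE, phase)}")
```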

For the above offender script, organisations can employ different mitigations:

  • At the pre-activity phase, organisations can adjust or limit return policies based on acceptable levels of loss, such as two per cent of revenue.
  • At the activity phase, an organisation can apply rule-based or machine learning approaches to evaluate the photos or videos submitted as evidence of damage. They can use warning nudges, such as pop-ups or AI agents, to potentially steer the consumer away from the unethical behaviour.
  • At the post-activity phase, organisations can monitor for repetitive returns and ban offending users (a minimal rule-based sketch follows this list).
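
As a rough illustration of the activity- and post-activity mitigations, the Python sketch below applies simple rules to a batch of refund claims: claims with flagged evidence go to manual review, and accounts exceeding a claim threshold receive a warning nudge. All field names and thresholds are assumptions.

```python
# Rule-based triage of refund claims: escalate flagged evidence, nudge
# repeat claimants, approve the rest. Thresholds are illustrative.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class RefundClaim:
    account_id: str
    order_value: float
    evidence_flagged: bool  # e.g. from an image-forensics check


MAX_CLAIMS_PER_QUARTER = 3  # illustrative acceptable-loss threshold


def triage_claims(claims: list[RefundClaim]) -> dict[str, str]:
    """Return an action per account: approve, warn (nudge), or review."""
    counts: dict[str, int] = defaultdict(int)
    actions: dict[str, str] = {}
    for claim in claims:
        counts[claim.account_id] += 1
        if claim.evidence_flagged:
            actions[claim.account_id] = "manual review + store credit only"
        elif counts[claim.account_id] > MAX_CLAIMS_PER_QUARTER:
            actions[claim.account_id] = "warning nudge + slower approval"
        else:
            actions.setdefault(claim.account_id, "approve")
    return actions


claims = [
    RefundClaim("u1", 25.0, False),
    RefundClaim("u2", 90.0, True),   # flagged evidence -> manual review
    RefundClaim("u1", 40.0, False),
    RefundClaim("u1", 15.0, False),
    RefundClaim("u1", 60.0, False),  # fourth claim -> warning nudge
]
print(triage_claims(claims))
```

A real system would also weigh order value, account age, and delivery confirmation before acting, but even simple per-account counting makes repetitive abuse visible.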

The bigger picture

Should unethical behaviour and misuse be penalised legally? Yes, but it is not that simple. From a legal and law enforcement perspective, such fraud is very difficult to control because most e-commerce fraud involves low-value amounts.

In a previous study, I interviewed e-commerce professionals. They indicated that only three to four per cent of reported fraud was referred to and investigated by law enforcement. All forms of fraud, including AI-enabled fraud, should be prosecutable. Realistically, however, law enforcement does not devote significant resources to low-level criminal and deviant behaviour because doing so is very inefficient, and agencies may lack the capability to collect digital evidence and prosecute digital crimes.

In the current landscape, e-commerce organisations must take more action to curb AI-driven deception. Otherwise, they risk eroding consumer confidence and imposing heavy costs on merchants. As generative AI becomes ubiquitous, the line between creative use and criminal misuse will blur further. Proactive strategies are essential to safeguard the integrity of online marketplaces.

Story: Dr Joshua Dwight, Associate Program Manager, IT and Software Engineering programs, RMIT University Vietnam
