The AI era has a double gender gap – here’s how to close it

Studies show women are still underrepresented in AI labs and less likely to use AI at work. Two RMIT Vietnam academics explain why those gaps matter and how we can flip AI into a force for equity.

Across the world, AI is becoming an everyday tool. Yet a double gender gap is opening: too few women are involved in building AI systems, and too few women actively use it.

A 2024 survey of 25,000 working adults found that 59% of men versus 51% of women use generative AI at least once a week. Among 18 to 24‑year‑olds, the gender gap is even wider at 71% versus 59%.

On the supply side, women are still underrepresented in the teams that design and test AI. The World Economic Forum estimated in 2023 that women make up only about 30% of the global AI workforce. As a result, AI systems risk repeating old stereotypes, and the productivity gains offered by AI may not reach everyone equally.

Why “who builds AI” matters

According to RMIT Senior Lecturer in Artificial Intelligence Dr Thuy Nguyen, AI and gender equity are deeply intertwined because AI systems learn from real‑world data, and that data includes long‑standing gender imbalances. Without diverse voices to spot problems early, these patterns get baked into the system and spread widely.

“AI can become biased when not enough women design or test it. If the teams building it are mostly men, they might miss blind spots that affect women,” she says.

Dr Thuy Nguyen, Senior Lecturer in Artificial Intelligence, School of Science, Engineering & Technology, RMIT University Vietnam

Dr Thuy points to the well-known 2018 case involving Amazon’s experimental hiring tool. Trained on years of resumes from a male‑dominated tech workforce, the system learned to downgrade applications containing words like “women’s”, such as “women’s chess club captain” or references to women’s colleges. It also favoured action verbs that were more common in men’s resumes.

Amazon scrapped the tool when the bias became clear, but Dr Thuy says it illustrates how a lack of women in design, testing, and review can let hidden biases sneak in and get amplified.

She also notes that many AI teams consist of mostly men from similar backgrounds, so they can miss how systems unfairly treat women or people from non-Western cultures. “Having more women, including more Asian women, in AI fields is important because diverse teams bring different life experiences, perspectives, and cultural insights that help spot and fix hidden biases before they spread,” she says.

Asian women can notice issues others may overlook, such as misrepresented East Asian facial features, the sexualisation of Asian women in generated images, or the use of Western standards that do not fit local contexts.

Greater diversity helps catch these problems early. “Diverse voices in developer teams push for better, more inclusive data, fairer testing, and more ethical choices,” Dr Thuy says. “That is how we create AI that works better for everyone instead of amplifying old inequalities.”

Why “who uses AI” matters

On the other side of the double gap is AI uptake. Associate Professor Catherine Earl, a social anthropologist from RMIT Vietnam’s School of Communication & Design, warns that unequal AI adoption can translate into unequal opportunities.

If women, especially younger women or those without university degrees, use AI less often, they risk being left out of new opportunities.

For women in business, AI may reinforce existing inequalities and limited access to sponsorship, promotions, and other career development. McKinsey’s 2025 Women in the Workplace report identifies a wide range of areas in which women risk disadvantage compared to men – risks that AI may further entrench.

Associate Professor Catherine Earl, School of Communication & Design, RMIT University Vietnam

Across Vietnam and Southeast Asia, many women still have weaker digital access and skills, which adds to their lack of confidence using AI.

Associate Professor Earl argues that part of the problem is how society frames AI. Instead of treating AI as something reserved for the tech-minded, society should consider it a basic skill for everyone.

“In Vietnam’s past, literacy and calligraphy were tasks of a small group of educated elites, generally not including women. But now women have equal literacy with men. Reading and writing are basic skills,” she says. “Similarly, AI use should be accepted as a basic skill of literacy and not something ‘new’ or ‘special’ used by a few.”

Closing the gender gap in AI

Associate Professor Earl believes that if schools and workplaces treated AI use as a required literacy, then girls and boys, women and men alike would use it confidently as part of everyday activities.

“Overcoming an AI literacy gap between women and men at the current time should be addressed as an issue of basic education. Women’s AI literacy classes and programs could be implemented in the same way that women’s reading and writing classes improved and equalised basic literacy for past generations.”

In the workplace, Vietnamese employers can take a leading role in developing proactive AI inclusion policies and practices for female employees and female leaders. “Sponsoring female staff to build their careers with AI as a basic literacy would help to level the playing field and overcome potential gender discrimination at work,” she says.

Meanwhile, Dr Thuy stresses the importance of strengthening women’s participation in AI research and development. This includes scholarships, mentoring, and leadership tracks that encourage more young women to join the field and rise through it.

She recommends that companies mitigate gender bias in AI by embedding inclusion throughout development – hiring and promoting more women, using diverse training data, and conducting regular bias audits before and after deployment.

For example, in hiring, companies must actively test their AI for gender bias by checking if certain job applicants, roles, or behaviours are being treated unfairly. “Responsible AI can strip out human biases in hiring or promotions by focusing purely on skills and potential instead of names, photos, or career break gaps,” she says.
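One common form the bias audit Dr Thuy describes can take is comparing a hiring model’s selection rates across gender groups – the so-called “four-fifths rule” used in employment fairness testing. The sketch below is a minimal, hypothetical illustration: the candidate outcomes and the 0.8 threshold are assumptions for demonstration, not data or tooling from the article.

```python
# Minimal sketch of a disparate-impact check on a hiring model's outputs.
# All data below is hypothetical: 1 = recommended for interview, 0 = not.

def selection_rate(outcomes):
    """Fraction of candidates the model recommended."""
    return sum(outcomes) / len(outcomes)

outcomes_by_group = {
    "women": [1, 0, 1, 0, 0, 1, 0, 0, 0, 0],
    "men":   [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
}

# Selection rate per group, then the ratio of the lowest to the highest rate.
rates = {group: selection_rate(o) for group, o in outcomes_by_group.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)
print(f"impact ratio: {ratio:.2f}")

# A ratio below 0.8 is a widely used warning sign of disparate impact
# and would trigger a review of training data and model features.
if ratio < 0.8:
    print("Audit flag: possible gender bias in selection rates")
```

In practice such a check would run on real audit data before and after deployment, and a flagged ratio would prompt exactly the kind of review of data and features that Dr Thuy recommends.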

The AI educator believes the potential for good is massive because AI can be scaled up quickly and cheaply.

“One well-designed, inclusive system can help millions of women access better opportunities, safer environments, or fairer decisions,” she says. “It’s important to remember that equitable AI isn’t automatic. It’s engineered and learned.”

Story: Ngoc Hoang
