How Artificial Intelligence Can Be Sexist
Artificial intelligence (AI) is rapidly transforming industries, but its development and deployment often reflect societal biases, including sexism. AI systems can perpetuate gender discrimination through biased training data, stereotypical outputs, and unequal impacts on women and men. Below, we explore how AI can be sexist, the consequences of gender bias, and potential solutions.
How AI Becomes Sexist
- Biased Training Data
AI systems learn from existing datasets, which often contain historical biases; a short code sketch after the examples below makes the mechanism concrete. For example:
- Amazon’s AI recruitment tool penalized resumes containing the word “women’s” because it was trained on the previous ten years of hiring data, which reflected a predominantly male workforce (Hong et al., 2020).
- Apple Card was investigated after complaints of gender discrimination: its algorithm assigned higher credit limits to men than to women, even when they had the same income and credit scores (Datatron, 2022).
- Generative AI models associate male names with leadership roles like “executive” and female names with domestic words like “home” and care-oriented roles like “nurse” (UNESCO, 2024).
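To see how this happens mechanically, consider the following minimal sketch in Python. The resumes, labels, and model choice are invented for illustration (this is not Amazon’s actual system); it shows how a classifier trained on historically biased hiring labels learns to penalize a gendered word:

```python
# A toy sketch (hypothetical data, not Amazon's system): a classifier
# trained on historically biased hiring labels learns that the token
# "women" predicts rejection, even though it says nothing about skill.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "past decisions": identical activities, but resumes that
# mention "women's" were rejected in the historical data.
resumes = [
    "chess club captain",           # hired
    "coding society lead",          # hired
    "robotics team lead",           # hired
    "women's chess club captain",   # rejected
    "women's coding society lead",  # rejected
    "women's robotics team lead",   # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "women" is strongly negative: the model
# has absorbed the bias in its labels, not a fact about candidates.
idx = vec.vocabulary_["women"]
print(f"coefficient for 'women': {model.coef_[0][idx]:.2f}")
```

The model never sees anything about candidate quality; it simply reproduces the statistical pattern in its training labels, which is precisely the failure mode reported for the Amazon tool.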
- Reinforcement of Stereotypes
Generative AI tools reproduce harmful gender norms:
- Image generators hypersexualize women while portraying men in professional roles such as astronauts or inventors (Lamensch, 2023).
- Text-based models like GPT-3.5 often default to assigning stereotypical roles to genders in stories or prompts (UN Women, 2024).
- Feminization of AI Assistants
Virtual assistants like Alexa and Siri are often designed with submissive, feminine voices, reinforcing stereotypes that women are suited for service roles (GO, 2023). These assistants have even been subjected to verbal harassment, mirroring real-world gender-based violence (GO, 2023).
- Unequal Representation in Decision-Making
In healthcare and finance, biased algorithms can lead to unequal treatment (a sketch after this list shows how such gaps can be measured):
- Credit scoring systems have been found to favor men over women for loan approvals (Datatron, 2022).
- Medical AI tools may prioritize male symptoms, leading to misdiagnosis for women. A recent study found that an AI algorithm analyzing blood tests to predict liver disease missed 44% of cases among women, compared with 23% among men (UCL, 2022).
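The liver-disease finding can be restated as a gap in false-negative rates between groups. The following sketch uses simulated patients; the 44% and 23% miss rates are the UCL figures, while everything else (cohort size, disease prevalence) is assumed for illustration:

```python
# Hypothetical illustration: measuring missed diagnoses (false
# negatives) by gender. The patients are simulated; only the 44%/23%
# miss rates come from the UCL finding cited above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gender = rng.choice(["F", "M"], size=n)
has_disease = rng.random(n) < 0.3  # assumed prevalence

# Assume a screening model that misses cases in women more often.
miss_rate = np.where(gender == "F", 0.44, 0.23)
predicted_positive = has_disease & (rng.random(n) > miss_rate)

for g in ("F", "M"):
    sick = has_disease & (gender == g)
    missed = sick & ~predicted_positive
    print(f"{g}: missed {missed.sum() / sick.sum():.0%} of true cases")
```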
Consequences of Gender Bias in AI
- Limited Opportunities for Women
Biased algorithms in hiring and education can exclude women from certain fields or roles.
- Amplification of Discrimination
AI systems can magnify existing inequalities by embedding discriminatory practices into automated processes. For instance, biased facial recognition systems struggle to identify women accurately, particularly women of color, leading to harmful consequences in law enforcement (del Villar, 2025).
- Cultural Harm
Hypersexualized portrayals of women in AI-generated content contribute to objectification and perpetuate harmful societal norms.
Addressing Gender Bias in AI
- Diverse Datasets
Training AI systems on inclusive datasets that represent all genders and communities can reduce bias. This involves actively removing historical stereotypes embedded in data (GO, 2023).
- Inclusive Development Teams
Employing diverse teams in AI development ensures multiple perspectives are considered, minimizing blind spots that lead to biased outputs (del Villar, 2025).
- Public Awareness and Regulation
Educating users about AI bias and implementing ethical guidelines for developers can help identify and mitigate discrimination before deployment (Hong et al., 2020).
- Continuous Monitoring
Regular audits of AI systems can uncover hidden biases and allow for corrective measures, ensuring fairness over time (del Villar, 2025); a minimal audit sketch follows.
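As one concrete form such an audit might take, the sketch below compares a model’s approval rate and false-negative rate across gender groups on a small held-out set. The column names (`gender`, `qualified`, `approved`) and the data are hypothetical:

```python
# A minimal fairness-audit sketch. Column names are hypothetical:
# `qualified` is the ground-truth label, `approved` the model decision.
import pandas as pd

audit_set = pd.DataFrame({
    "gender":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "qualified": [1,   1,   0,   1,   1,   1,   0,   1],
    "approved":  [1,   0,   0,   0,   1,   1,   0,   1],
})

for group, sub in audit_set.groupby("gender"):
    qualified = sub[sub["qualified"] == 1]
    approval_rate = sub["approved"].mean()
    # Share of genuinely qualified applicants the model rejected.
    false_negative_rate = 1 - qualified["approved"].mean()
    print(f"{group}: approval_rate={approval_rate:.2f}, "
          f"false_negative_rate={false_negative_rate:.2f}")
```

A large gap between groups on either metric would flag the system for review or retraining before, and periodically after, deployment.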
Conclusion
AI is a powerful tool, but its potential is undermined when it replicates societal sexism. By addressing biased training data, fostering diversity in development teams, and promoting ethical oversight, we can create more equitable AI systems that benefit everyone equally. As technology evolves, ensuring inclusivity must remain a priority to prevent the amplification of gender discrimination in our digital future.
Citations:
del Villar, Z. (2025, February 5). How AI reinforces gender bias-and what we can do about it. UN Women – Headquarters. https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it
UNESCO. (2024). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. UNESCO.org. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
G.O., A. (2023, March 17). Addressing gender bias to achieve ethical AI. IPI Global Observatory. https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/
Hong, J.-W., Choi, S., & Williams, D. (2020). Sexist AI: An experiment integrating CASA and ELM. International Journal of Human–Computer Interaction.
Lamensch, M. (2023, June 14). Generative AI tools are perpetuating harmful gender stereotypes. Centre for International Governance Innovation. https://www.cigionline.org/articles/generative-ai-tools-are-perpetuating-harmful-gender-stereotypes/
UN Women. (2024, June 28). Artificial intelligence and gender equality. UN Women – Headquarters. https://www.unwomen.org/en/articles/explainer/artificial-intelligence-and-gender-equality
Datatron. (2022, September 20). How gender bias led to the scrutiny of the Apple Card. https://datatron.com/how-gender-bias-led-to-the-scrutiny-of-the-apple-card/#:~:text=It%20was%20found%20that%20women,social%20security%20number%2C%20and%20birthdate.
UCL. (2022, July 13). Gender bias revealed in AI tools screening for liver disease. UCL News. https://www.ucl.ac.uk/news/2022/jul/gender-bias-revealed-ai-tools-screening-liver-disease#:~:text=11%20July%202022&text=The%20researchers%20found%20that%20the,for%20women%20compared%20to%20men.