By Tanya Goldman, Tori Coan and Katherine Gallagher Robbins
Table of Contents
I. Introduction
II. Risks of AI for Women Workers
A. Gender gaps in the AI workforce
B. Potential adoption gaps of generative AI in the workplace
C. Rational caution may affect women’s use of generative AI
D. Technology-facilitated harassment
E. Algorithmic management and worsening job quality
III. Implications for Women’s Jobs
A. Mixed data on how women’s jobs will be affected
B. Who will be most impacted: exploring gender, race and ethnicity in AI-vulnerable occupations
IV. Policy Landscape
V. Policy Recommendations
A. Enforce existing anti-discrimination and other worker protection laws
B. Establish strong federal protections for workers in AI-enabled workplaces, while preserving state authority to regulate AI
C. Establish legal standards that address accountability for developers, deployers and platforms for AI-enabled harassment
D. Center gender equity and intersectional analysis in AI regulation and research and promote gender diversity in the AI workforce
E. Center worker voice in AI governance
F. Strengthen social insurance for labor market transitions
VI. Conclusion
Introduction
Artificial intelligence (AI) is reshaping labor markets and work in ways that will profoundly affect women workers, who comprise almost half the workforceNational Partnership calculations from U.S. Bureau of Labor Statistics. (2026, April). Table A-1. Employment status of the civilian population by sex and age. Retrieved 6 April 2026, from https://www.bls.gov/news.release/empsit.t01.htm yet face distinct vulnerabilities and opportunities with the growth of AI. This brief reviews existing research on the impact of AI on women workers and provides new data on how women workers are overrepresented in occupations where they may be particularly affected by AI in the workplace. It discusses ways AI technologies impact women’s employment and working conditions, as well as risks of exacerbating existing workplace inequities and barriers for women. Finally, this brief identifies knowledge gaps that, if addressed, could improve understanding of whether AI will automate or augment women’s work, and it underscores the need for evidence-based policy responses.
Risks of AI for Women Workers
Gender gaps in the AI workforce
Women remain significantly underrepresented in the AI development workforce, from software engineers to the leaders shaping these technologies.WomenTech Network. (n.d.). Women in The Workforce: The Economic Gender Gap. Retrieved 1 April 2026, from https://www.womentech.net/women-in-tech-stats; Hupfer, S. et al. (2024, November). Women and Generative AI: The Adoption Gap is Closing Fast, but a Trust Gap Persists. Retrieved 6 April 2026, from Deloitte website: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html This underrepresentation means women have less influence over how AI systems are designed, deployed and regulated – even as these systems increasingly shape their work lives.
Potential adoption gaps of generative AI in the workplace
Research on gender differences in AI adoption presents a complex and evolving picture. An analysis examining 18 different studies estimated that women use generative AI tools, such as ChatGPT and Claude, at a rate 25 percent lower than men.Blanding, M. (2025, February 20). Women Are Avoiding AI. Will Their Careers Suffer? Retrieved 6 April 2026 from Harvard Business School website: https://www.library.hbs.edu/working-knowledge/women-are-avoiding-using-artificial-intelligence-can-that-hurt-their-careers; Otis, N.G., Delecourt, S., Cranney, K., & Koning, R. (2025). Global Evidence on Gender Gaps and Generative AI. Harvard Business School Working Paper No. 25-023. Retrieved 6 April 2026, from https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf These researchers expressed concern that as AI tools become more widespread in certain occupations, lower adoption rates among women could exacerbate gender wage gaps and hinder career advancement.Blanding, M. (2025, February 20). Women Are Avoiding AI. Will Their Careers Suffer? Retrieved 6 April 2026 from Harvard Business School website: https://www.library.hbs.edu/working-knowledge/women-are-avoiding-using-artificial-intelligence-can-that-hurt-their-careers These findings are consistent with recent polling, which shows women are less likely than men to report using AI tools at work. Lean In (n.d.). New Research: Women Use AI Less Often at Work and Get Less Credit. Retrieved 2 April 2026, from https://leanin.org/research/ai-women-gender-gap-data; Data for Progress & Groundwork Collaboration. (n.d.). [Survey Data] Retrieved 6 April 2026, from https://www.filesforprogress.org/datasets/2026/3/dfp_gwc_ffp_25_08_17_tabs.pdf
However, separate data suggests women’s use of generative AI is trending upward and may be reaching parity. In an analysis of global ChatGPT users from 2022-2024, users with typically female names represented 42 percent of the tool’s 200 million users.Blanding, M. (2025, February 20). Women Are Avoiding AI. Will Their Careers Suffer? Retrieved 6 April 2026 from Harvard Business School website: https://www.library.hbs.edu/working-knowledge/women-are-avoiding-using-artificial-intelligence-can-that-hurt-their-careers; Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., & Wadman, K. (2025, September). How People Use ChatGPT. National Bureau of Economic Research Working Paper No. 34255. Retrieved 6 April, 2026, from https://doi.org/10.3386/w34255 By July 2025, likely women users made up a slight majority of users.Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., & Wadman, K. (2025, September). How People Use ChatGPT. National Bureau of Economic Research Working Paper No. 34255. Retrieved 6 April, 2026, from https://doi.org/10.3386/w34255 They were also more likely to use ChatGPT for “writing” tasks (such as editing or critiquing text and personal writing) and “practical guidance” (including how-to advice).Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., & Wadman, K. (2025, September). How People Use ChatGPT. National Bureau of Economic Research Working Paper No. 34255. Retrieved 6 April, 2026, from https://doi.org/10.3386/w34255
Despite this trend toward parity in overall use, gendered differences exist in how women use these tools.Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., & Wadman, K. (2025, September). How People Use ChatGPT. National Bureau of Economic Research Working Paper No. 34255. Retrieved 6 April 2026, from https://doi.org/10.3386/w34255 Since non-work-related messages comprise over 70 percent of all ChatGPT use,Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., & Wadman, K. (2025, September). How People Use ChatGPT. National Bureau of Economic Research Working Paper No. 34255. Retrieved 6 April 2026, from https://doi.org/10.3386/w34255 understanding whether women are using generative AI for work-related or personal purposes is essential to interpreting adoption gaps. Other studies have similarly found gender gaps in women’s regular engagement with generative AI, though those gaps are closing.Hupfer, S. et al. (2024, November). Women and Generative AI: The Adoption Gap is Closing Fast, but a Trust Gap Persists. Retrieved 6 April 2026, from Deloitte website: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html These studies also find that even when women engage more with AI, they express more concern about the technology than men do and have less trust in its security.Hupfer, S. et al. (2024, November). Women and Generative AI: The Adoption Gap is Closing Fast, but a Trust Gap Persists. Retrieved 6 April 2026, from Deloitte website: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html
Rational caution may affect women’s use of generative AI
There are numerous reasons women approach generative AI in the workplace with caution. Women’s concerns about these tools are often well-founded and reflect both technical limitations and workplace realities.
Accuracy and bias concerns
Women may reasonably question the reliability of generative AI tools, particularly when those tools reflect existing biases.Lean In (n.d.). New Research: Women Use AI Less Often at Work and Get Less Credit. Retrieved 2 April 2026, from https://leanin.org/research/ai-women-gender-gap-data As one author noted, “When women engage with systems that they’ve been largely left out of creating, the products can feel foreign, awkward, or even hostile.”Bolis, M. (2025, October). The AI Gender Gap Paradox. Stanford Social Innovation Review. Retrieved 6 April 2026, from https://doi.org/10.48558/FQ17-J361 When AI systems are trained on biased data, they produce biased outputs.Hupfer, S. et al. (2024, November). Women and Generative AI: The Adoption Gap is Closing Fast, but a Trust Gap Persists. Retrieved 6 April 2026, from Deloitte website: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html For example, one experiment prompted ChatGPT to generate resumes using typical male or female names.Stanford University. (2025, October). Researchers Uncover AI Bias Against Older Working Women. Stanford Report. Retrieved 6 April 2026, from https://news.stanford.edu/stories/2025/10/ai-llms-age-bias-older-working-women-research ChatGPT created resumes for women that portrayed them as younger and less experienced than the men’s resumes.Stanford University. (2025, October). Researchers Uncover AI Bias Against Older Working Women. Stanford Report. Retrieved 6 April 2026, from https://news.stanford.edu/stories/2025/10/ai-llms-age-bias-older-working-women-research When ChatGPT evaluated the resumes, it rated the older men’s resumes higher, demonstrating and reinforcing bias in the model.Stanford University. (2025, October). Researchers Uncover AI Bias Against Older Working Women. Stanford Report. Retrieved 6 April 2026, from https://news.stanford.edu/stories/2025/10/ai-llms-age-bias-older-working-women-research Another study found that generative AI provided women worse advice than men on employment decisions, such as salary negotiations.Bolis, M. (2025, October). The AI Gender Gap Paradox. Stanford Social Innovation Review. Retrieved 6 April 2026, from https://doi.org/10.48558/FQ17-J361 Biased outputs may be further reinforced by gender gaps in the use of generative AI tools, since these systems continue to learn from user interactions.Hupfer, S. et al. (2024, November). Women and Generative AI: The Adoption Gap is Closing Fast, but a Trust Gap Persists. Retrieved 6 April 2026, from Deloitte website: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html
Transparency and reliability
The lack of transparency about how AI systems work, combined with experiences of AI hallucinations (when AI generates false or nonsensical information), reinforces many women’s concerns about the value and trustworthiness of these systems.Bolis, M. (2025, October). The AI Gender Gap Paradox. Stanford Social Innovation Review. Retrieved 6 April 2026, from https://doi.org/10.48558/FQ17-J361 Hallucinations occur because large language models (LLMs) do not simply retrieve stored information; they analyze patterns in data to predict the most plausible response.Metz, C. & Weise, K. (2025, May). A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse. The New York Times. Retrieved 18 April 2026, from https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html; Wiggers, K. (2024). Study Suggests That Even the Best AI Models Hallucinate a Bunch. TechCrunch. Retrieved 6 April 2026, from https://techcrunch.com/2024/08/14/study-suggests-that-even-the-best-ai-models-hallucinate-a-bunch/ As a result, users receive fast, fluent responses even when those responses are inaccurate. This may help explain why women in one survey were 29 percent more likely than men to question AI’s accuracy and 38 percent more likely to have ethical concerns about using AI.Lean In (n.d.). New Research: Women Use AI Less Often at Work and Get Less Credit. Retrieved 2 April 2026, from https://leanin.org/research/ai-women-gender-gap-data; Data for Progress & Groundwork Collaboration. (n.d.). [Survey Data] Retrieved 6 April 2026, from https://www.filesforprogress.org/datasets/2026/3/dfp_gwc_ffp_25_08_17_tabs.pdf
Security and privacy
Privacy breaches in AI systems have repeatedly exposed users’ sensitive information. In 2025, thousands of ChatGPT conversations containing medical questions, business strategies and requests for relationship advice surfaced in Google search results when users inadvertently made them publicly discoverable.Stokel-Walker, C. (2025, July 30). Exclusive: Google is Indexing ChatGPT Conversations, Potentially Exposing Sensitive User Data. Fast Company. Retrieved 6 April 2026, from https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations While the company disabled those capabilities, these incidents demonstrate how easily information shared with AI tools can become exposed beyond the user’s control or intention. Employers can also collect significant amounts of data about their workers, often without their knowledge or consent, and even sell it to third parties, raising civil rights and privacy concerns.Bernhardt, A., Kresge, L., & Suleiman, R. (2021, November). Data and Algorithms at Work: The Case for Worker Technology Rights. Retrieved 8 April 2026, from UC Berkeley Labor Center website: https://laborcenter.berkeley.edu/data-algorithms-at-work/
Professional penalties for AI use
Women may rightfully be concerned about the ethics or professional consequences of using AI tools in the workplace.Otis, N.G., Delecourt, S., Cranney, K., & Koning, R. (2025). Global Evidence on Gender Gaps and Generative AI. Harvard Business School Working Paper No. 25-023. Retrieved 6 April 2026, from https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf In one study, participants evaluated identical code snippets that were labeled as written with or without AI assistance.Acar, O.A., Gai, P.J., Tu, Y. & Hou, J. (2025, August). Research: The Hidden Penalty of Using AI at Work. Harvard Business Review. Retrieved 6 April 2026, from https://hbr.org/2025/08/research-the-hidden-penalty-of-using-ai-at-work When participants believed a woman had written the code with the help of AI, the competency penalty for women was more than double that for men writing the same code.Acar, O.A., Gai, P.J., Tu, Y. & Hou, J. (2025, August). Research: The Hidden Penalty of Using AI at Work. Harvard Business Review. Retrieved 6 April 2026, from https://hbr.org/2025/08/research-the-hidden-penalty-of-using-ai-at-work This illustrates that pre-existing gendered workplace dynamics and discrimination can make women more vulnerable to negative consequences of AI adoption.
Technology-facilitated harassment
AI tools have introduced new methods of harassment in professional contexts. Generative AI tools can be manipulated to produce sexualized content about women, including coworkers, and these technologies can be weaponized to intimidate, humiliate or professionally damage women.Mingeirou, K., Osman, Y. & Rafin, R. (2026, February). The Impact of Artificial Intelligence on Violence Against Women and Girls. Retrieved 8 April 2026, from Stimson Center website: https://www.stimson.org/2026/the-impact-of-artificial-intelligence-on-violence-against-women-and-girls/#elementor-toc__heading-anchor-0 The ease and relative anonymity of creating such content, combined with weak and underenforced legal frameworks, create serious risks for women in the workplace.
Women in the public sphere, including celebrities, politicians, journalists and business leaders, have been targets of AI-enabled harassment campaigns. For example, Grok, xAI’s chatbot, was used to create and distribute nonconsensual, sexualized deepfake images of women.Hayes, C., Chia, O. & McMahon, L. (2026, January 15). X to Stop Grok AI from Undressing Images of Real People After Backlash. BBC. Retrieved 6 April 2026, from https://www.bbc.com/news/articles/ce8gz8g2qnlo The images spread rapidly before xAI temporarily restricted Grok’s image generation capabilities. The incident illustrated how easily accessible AI tools can be exploited for harassment and cause significant harm. For women workers, the threat extends beyond public figures: the same technology that can target celebrities can be used against ordinary employees to create hostile work environments and damage professional reputations.
Algorithmic management and worsening job quality
AI introduces new forms of management in workplaces, using algorithms and automated systems that control scheduling, task assignments, performance monitoring and evaluation, wage determination and disciplinary actions.Tung, I., Sonn, P., Pinto, M., Dworack-Fisher, S. & Boxerman, J. (2025, July). When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses. Retrieved 6 April 2026 from National Employment Law Project website: https://www.nelp.org/app/uploads/2025/07/When-Bossware-Manages-Workers-Policy-Agenda-July-2025.pdf These tools can undermine workers’ rights and protections in multiple ways, including creating unsafe working conditions, enabling discriminatory practices, and eroding job quality and stability.Tung, I., Sonn, P., Pinto, M., Dworack-Fisher, S. & Boxerman, J. (2025, July). When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses. Retrieved 6 April 2026 from National Employment Law Project website: https://www.nelp.org/app/uploads/2025/07/When-Bossware-Manages-Workers-Policy-Agenda-July-2025.pdf Employers also use AI to hide the extensive control they exert over workers they deem independent contractors, exacerbating the misclassification of employees.Tung, I., Sonn, P., Pinto, M., Dworack-Fisher, S. & Boxerman, J. (2025, July). When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses. Retrieved 6 April 2026 from National Employment Law Project website: https://www.nelp.org/app/uploads/2025/07/When-Bossware-Manages-Workers-Policy-Agenda-July-2025.pdf
Algorithmic management systems often lack transparency, making it difficult for workers to understand how decisions affecting their livelihoods are made. This opacity can perpetuate and amplify bias in task assignment, performance evaluation and pay determination.Building an AI-Ready America: Adopting AI at Work: Hearing before the Subcommittee on Health, Employment, Labor, and Pensions, of the House Education and Workforce Committee, 119th Cong. (2026) (testimony of Tanya L. Goldman). Retrieved 6 April 2026, from https://edworkforce.house.gov/uploadedfiles/goldman_testimony.pdf Workers may face constant surveillance, unpredictable schedules that make caregiving responsibilities difficult to manage and automated discipline for failing to meet algorithmically determined productivity standards – often without human review or the ability to appeal.Building an AI-Ready America: Adopting AI at Work: Hearing before the Subcommittee on Health, Employment, Labor, and Pensions, of the House Education and Workforce Committee, 119th Cong. (2026) (testimony of Tanya L. Goldman). Retrieved 6 April 2026, from https://edworkforce.house.gov/uploadedfiles/goldman_testimony.pdf
One such tool – time-on-task surveillance – can pressure workers to work constantly and make it difficult, if not impossible, to take bathroom or water breaks. These systems may result in discrimination against workers who have accommodations because of pregnancy, disability or religion. For example, a pregnant worker may face unfair discipline if they do not meet an algorithmically determined productivity standard, even if they have an accommodation to take breaks to use the restroom.Ng, A. & Rubin, B.F. (2019, May 6). Amazon fired these 7 pregnant workers. Then came the lawsuits. CNET. Retrieved 6 April 2026, from https://www.cnet.com/tech/tech-industry/features/amazon-fired-these-7-pregnant-workers-then-came-the-lawsuits/; Crispin, J. (2021, July 5). Welcome to dystopia: getting fired from your job as an Amazon worker by an app. The Guardian. Retrieved 6 April 2026, from https://www.theguardian.com/commentisfree/2021/jul/05/amazon-worker-fired-app-dystopia
These systems may be disproportionately deployed in sectors that increasingly rely on women workers, particularly women of color and immigrant women. This includes “gig economy” platforms such as DoorDash and ShiftMed, and platforms for nursingWells, K. & Spilda, F.U. (2024). Uber for Nursing: How an AI-Powered Gig Model Is Threatening Health Care. Retrieved 18 April 2026, from The Roosevelt Institute website: https://rooseveltinstitute.org/wp-content/uploads/2024/12/RI_Uber-for-Nursing_Brief_202412.pdf; Mateescu, A. (2021, Nov.). Electronic Visit Verification: The Weight of Surveillance and the Fracturing of Care. Retrieved 1 April 2026, from Data & Society website, https://datasociety.net/library/electronic-visit-verification-the-weight-of-surveillance-and-the-fracturing-of-care/; Tung, I., Sonn, P., Pinto, M., Dworack-Fisher, S. & Boxerman, J. (2025, July). When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses. Retrieved 6 April 2026 from National Employment Law Project website: https://www.nelp.org/app/uploads/2025/07/When-Bossware-Manages-Workers-Policy-Agenda-July-2025.pdf and warehouse work,Tung, I., Sonn, P., Pinto, M., Dworack-Fisher, S. & Boxerman, J. (2025, July). When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses. Retrieved 6 April 2026 from National Employment Law Project website: https://www.nelp.org/app/uploads/2025/07/When-Bossware-Manages-Workers-Policy-Agenda-July-2025.pdf where algorithmic systems dictate nearly every aspect of the workday – from break times to the pace of work to whether someone keeps their job.
Implications for Women’s Jobs
Mixed data on how women’s jobs will be affected
There is ongoing debate about the extent to which AI will automate, augment, or otherwise transform workers’ jobs.Compare Gimbel, M., Kinder, M., Kendall, J. & Lee, M. (2025, October). Evaluating the Impact of AI on the Labor Market: Current State of Affairs. Retrieved 8 April 2026, from Yale Budget Lab website: https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs with Brynjolfsson, E., Chandar, B. & Chen, R. (2025, November). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. Retrieved 8 April 2026, from Stanford Digital Economy Lab website: https://digitaleconomy.stanford.edu/publication/canaries-in-the-coal-mine-six-facts-about-the-recent-employment-effects-of-artificial-intelligence/ Women workers are concentrated in sectors with varying levels of AI exposure, making predictions about overall impact complex. The impact of AI on women workers depends, in part, on what type of AI tool is considered. Existing research examines the impact of generative AI, but other forms of AI, including in robotics, may have different implications.Weise, K. (2025, Oct. 21). Amazon Plans to Replace More Than Half a Million Jobs with Robots. The New York Times. Retrieved 2 April 2026, from https://www.nytimes.com/2025/10/21/technology/inside-amazons-plans-to-replace-workers-with-robots.html
The type of work affects how much women are exposed specifically to generative AI tools. One analysis of Claude users found AI usage concentrated among workers in the upper quartile of wages (such as software developers), though within that quartile there was a drop-off in use for the highest wage workers, such as doctors.Handa, K., et al. (2025). Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations. Retrieved 6 April 2026, from Anthropic website: https://assets.anthropic.com/m/2e23255f1e84ca97/original/Economic_Tasks_AI_Paper.pdf Low-wage workers also had lower use rates.Handa, K., et al. (2025). Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations. Retrieved 6 April 2026, from Anthropic website: https://assets.anthropic.com/m/2e23255f1e84ca97/original/Economic_Tasks_AI_Paper.pdf This usage pattern suggests that middle- and upper-income professional women may experience different impacts from generative AI than women in lower-wage service and care work.
Women are concentrated in care-based sectors, including nursing, child care, and home health care,Mason, J. and Robbins, K.G. (2023, March). Women’s Work is Undervalued, and It’s Costing Us Billions. Retrieved 1 April 2026, from National Partnership for Women & Families website: https://nationalpartnership.org/wp-content/uploads/2023/04/womens-work-is-undervalued.pdf that are unlikely to be fully automated because these roles require emotional intelligence, human connection, physical presence and capacity, and complex interpersonal skills that AI cannot replicate. Workers in these sectors are, however, affected by different uses of AI in the workplace, including through algorithmic management systems used for scheduling, performance monitoring, and surveillance, worsening their job quality even if not replacing their jobs.Wells, K. & Spilda, F.U. (2024). Uber for Nursing: How an AI-Powered Gig Model Is Threatening Health Care. Retrieved 18 April 2026, from The Roosevelt Institute website: https://rooseveltinstitute.org/wp-content/uploads/2024/12/RI_Uber-for-Nursing_Brief_202412.pdf
Who will be most impacted: exploring gender, race and ethnicity in AI-vulnerable occupations
Our analysis shows that there are millions of women workers whose jobs will likely be affected by AI, and who may lack the necessary tools to adapt if displaced from their jobs.See also Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement; Manning, S., Aguirre, T., Muro, M. & Methkupally, S. (2026, January). Measuring US Workers’ Capacity to Adapt to AI-Driven Job Displacement. Retrieved 6 April 2026, from Brookings website: https://www.brookings.edu/articles/measuring-us-workers-capacity-to-adapt-to-ai-driven-job-displacement/ Because of the nation’s history of undervaluing and underpaying the labor of women, particularly women of color, women are overrepresented in certain jobs with lower wages and worse benefits and conditions.Mason, J. and Robbins, K.G. (2023, March). Women’s Work is Undervalued, and It’s Costing Us Billions. Retrieved 1 April 2026, from National Partnership for Women & Families website: https://nationalpartnership.org/wp-content/uploads/2023/04/womens-work-is-undervalued.pdf; Mason, J. (2023, March). Occupational segregation – a legacy of racism, sexism and ableism – is a major contributor to the wage gap [Blog post]. Retrieved 6 April 2026, from https://nationalpartnership.org/occupational-segregation-a-legacy-of-racism-sexism-and-ableism-is-a-major-contributor-to-the-wage-gap/ As a result, women workers are concentrated in sectors that are likely to be affected by AI, such as administrative support, clerical work and customer service.Berg, J., Kamiński, K., Konopczyński, F., Ładna, A., Rosłaniec, K. & Troszyński, M. (n.d.). Generative AI and Jobs: A Refined Global Index of Occupational Exposure. International Labour Organization Working Paper No. 140. Retrieved 6 April 2026, from https://webapps.ilo.org/static/english/intserv/working-papers/wp140/index.html
Extensive research has looked at occupations and classified them as “AI-exposed.”del Rio-Chanona, R.M., Ernst, E., Merola, R., Samaan, D. & Teutloff, O. (2025). AI and jobs. A review of theory, estimates, and evidence. Retrieved 8 April 2026, from https://arxiv.org/pdf/2509.15265; Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. Science, 384(6702), 1306-1308. Retrieved 8 April 2026, from https://files.lafm.com.co/assets/public/2023-03/2303.10130.pdf That is, these studies identify where labor market responses to AI might initially appear. New research by Manning and Aguirre has added an analysis of “adaptive capacity.”Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement; Manning, S., Aguirre, T., Muro, M. & Methkupally, S. (2026, January). Measuring US Workers’ Capacity to Adapt to AI-Driven Job Displacement. Retrieved 6 April 2026, from Brookings website: https://www.brookings.edu/articles/measuring-us-workers-capacity-to-adapt-to-ai-driven-job-displacement/ Looking at factors such as “workers’ savings, age, labor market density, and skill transferability,” the researchers evaluated whether workers displaced by LLMs would have an unemployment cushion and be able to transition to new work.Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement The researchers’ adaptive capacity framing measures workers’ ability to weather the financial shocks of a layoff and their skills and opportunity to find a new job.
The researchers find that there are approximately 6 million workers who are in AI-exposed jobs and lack adaptive capacity. Of these, more than 8 in 10 are women.Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement These workers are largely in office jobs, such as clerical and administrative work.Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement
We expand on Manning and Aguirre’s analysis, estimating the race, ethnicity and gender of workers in the 15 occupations they have designated as having both the highest AI exposure and the lowest adaptive capacity – a combination we refer to as “AI-vulnerable” occupations.Author calculations using American Community Survey 2019-2023 5 Year Estimate Microdata. See method note for full methodology. Our analysis finds that while women make up slightly less than half of all workers (47 percent), they account for 83 percent of those employed in AI-vulnerable occupations. Women of color account for 31 percent of workers in the 15 most AI-vulnerable jobs. There are 6 million women working in AI-vulnerable occupations.Because our analysis uses different data sources, the numbers of women in these occupations are not identical to Manning, S. & Aguirre, T. (2026). The Economics of Transformative AI. University of Chicago Press, chap. 8, https://www.nber.org/books-and-chapters/economics-transformative-ai/how-adaptable-are-american-workers-ai-induced-job-displacement; Manning, S., Aguirre, T., Muro, M. & Methkupally, S. (2026, January). Measuring US Workers’ Capacity to Adapt to AI-Driven Job Displacement. Retrieved 6 April 2026, from Brookings website: https://www.brookings.edu/articles/measuring-us-workers-capacity-to-adapt-to-ai-driven-job-displacement/. White women, Latinas, and American Indian and Alaska Native (AIAN) women are particularly overrepresented in AI-vulnerable occupations – their shares of the most AI-vulnerable jobs are nearly double their shares of the overall workforce. Black and multiracial women’s shares of the most AI-vulnerable jobs are more than one and a half times larger than their shares of the workforce overall.
Women are half or more of workers in all of the 15 most AI-vulnerable occupations except one: property appraisers and assessors. Women of color make up nearly one-third or more of workers in eight of the 15 occupations, compared to less than one-fifth of the workforce overall. Among these 15 occupations, women of color are most overrepresented as government program eligibility interviewers and as interpreters and translators, where they make up 46 and 45 percent of workers, respectively.
Policy Landscape
The regulatory environment for AI in the workplace is insufficient to protect workers’ rights. The Trump administration has prioritized the tech industry’s push for unrestricted AI development, opposing the implementation of critical guardrails needed to protect workers, including women. This approach favors rapid technological deployment over worker safety, privacy and equity. When surveyed, women are much more likely than men to agree that the government should play a stronger role in regulating AI and protecting jobs, even if it requires slowing down the development of new technologies.Data for Progress & Groundwork Collaborative. (n.d.). [Survey Data] Retrieved 6 April 2026, from https://www.filesforprogress.org/datasets/2026/3/dfp_gwc_ffp_25_08_17_tabs.pdf
In the absence of comprehensive federal action, states have stepped in to fill the regulatory gap. Several states have enacted laws addressing AI-related workplace issues, including protections against algorithmic bias in hiring, requirements for transparency in automated employment decisions and safeguards for workers subject to algorithmic management.Building an AI-Ready America: Adopting AI at Work: Hearing before the Subcommittee on Health, Employment, Labor, and Pensions, of the House Education and Workforce Committee, 119th Cong. (2026) (testimony of Tanya L. Goldman). Retrieved 6 April 2026, from https://edworkforce.house.gov/uploadedfiles/goldman_testimony.pdf These state-level efforts represent important progress in establishing baseline protections for workers navigating AI-enabled workplaces.
The Trump administration and some members of Congress, however, are still pushing for a federal framework that would preempt or place a moratorium on state AI laws.Breuninger, K. (2026, March 20). Trump administration unveils national AI policy framework to limit state power. CNBC, Retrieved 6 April 2026, from https://www.cnbc.com/2026/03/20/trump-ai-policy-framework.html Such legislation would prevent states from enacting new protections and could nullify existing state safeguards – leaving workers vulnerable when employers deploy discriminatory and exploitative AI systems in the workplace.Steffens, S. & Sanders, S. (2025, November). How Banning State Regulation of AI Harms Workers. Retrieved 6 April 2026, from We Build Progress website: https://webuildprogress.org/how-banning-state-regulation-of-ai-harms-workers
Workers need federal and state guardrails. Federal standards are essential to establish consistent baseline protections across all states. At the same time, given the rapid pace of AI development and deployment, states must retain the flexibility to respond quickly to emerging harms and test innovative policy solutions. A federal preemption that blocks state action would be devastating for worker protections at precisely the moment when AI is fundamentally reshaping the workplace.
Policy Recommendations
People are very concerned about the impacts of AI on their jobs and lives and want stronger guardrails. A majority of voters say AI’s risks outweigh its benefits,Smith, A. (2026, March 10). Poll: Majority of voters say risks of AI outweigh the benefits. NBC News. Retrieved 6 April 2026, from https://www.nbcnews.com/politics/politics-news/poll-majority-voters-say-risks-ai-outweigh-benefits-rcna262196 and majorities of both parties support “more regulation to limit its potential negative impact on society.”Cousens, M., Smith, I. & Russell, R. (2025, December). Views of AI and Data Centers. Retrieved 6 April 2026, from Navigator Research website: https://navigatorresearch.org/views-of-ai-and-data-centers/ Another poll similarly found strong support among workers for stronger guardrails around the use of AI or algorithmic management in the workplace.Scherer, M., Negron, W. & Schwartz, L. (2025). What Do Workers Want? A CDT/Coworker Deliberative Poll on Workplace Surveillance and Datafication. Retrieved 6 April 2026, from Center for Democracy & Technology website: https://cdt.org/wp-content/uploads/2025/03/CDT-Report-Deliberative-Polling-final.pdf
History demonstrates that self-regulation and voluntary initiatives are insufficient to protect women workers and advance their interests. Discrimination and other harms proliferate in the absence of concrete rights and enforceable protections. This is a critical moment for policymakers and advocates to engage women workers directly about what they want and need as AI reshapes their work and livelihoods, and to ensure that women are centered in AI policies that might disproportionately harm them.
While significant uncertainties remain about the full scope of impacts women workers may face, the following policy directions warrant consideration.
Enforce existing anti-discrimination and other worker protection laws
Many foundational workplace laws already provide workers with baseline protections from discrimination and other harms arising from AI, surveillance and automated management systems – including misclassification (when employees are classified as independent contractors), wage theft, interference with organizing and collective bargaining, and unsafe working conditions. However, the agencies that enforce these laws are chronically underfunded and understaffed. Congress should ensure that the Equal Employment Opportunity Commission (EEOC), the National Labor Relations Board, and the Department of Labor’s Wage and Hour Division and Occupational Safety and Health Administration have the resources they need and are actively enforcing the law and protecting workers’ access to remedies.
Under the Biden Administration, the EEOC launched an initiative on AI and algorithmic fairness, issued guidance on AI selection procedures and published resources for workers on disability discrimination and the use of software to assess job applicants and employees.Economic Policy Institute & Workshop. (2025, September). Database of Biden Administration Actions on AI. Retrieved 6 April 2026, from https://www.epi.org/database-of-biden-administration-actions-on-ai/; U.S. Equal Employment Opportunity Commission. (2022). The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. Retrieved 6 April 2026, from https://perma.cc/79ZC-37XZ The current administration has actively removed this guidance and curtailed enforcement, including dismissing every complaint based on disparate impact, a critical legal theory for addressing employment discrimination enabled by AI.Bains, C. (2026, January). When Machines Discriminate: The Critical Role of Disparate Impact in AI Accountability. Retrieved 6 April 2026, from The Leadership Conference on Civil and Human Rights website: https://civilrights.org/disparate-impact-ai/
Establish strong federal protections for workers in AI-enabled workplaces, while preserving state authority to regulate AI
In some cases new worker protections will be needed to address harms from AI use. These should include:
Transparency and disclosure
Requirements for AI developers and employers to disclose how systems are trained, what data they use, how decisions are made, and what safeguards exist to prevent bias and protect worker privacy. Requirements for employers to notify workers when AI systems are used to make decisions affecting their employment, along with meaningful avenues to contest automated decisions.
Human-in-the-loop
Significant decisions that impact workers should be made with human oversight. Workers should be able to understand, review and appeal automated decisions without fear of retaliation. Decision-makers need training on how the algorithm operates and an understanding of its limitations.
Pre- and post-deployment assessments
Employers using AI for hiring, performance evaluation, scheduling, wage determination, or worker management should be required to conduct regular bias and discrimination assessments, with results made available to workers and regulators.
Establish legal standards that address accountability for developers, deployers and platforms for AI-enabled harassment
In addition to worker protections, legal frameworks should address AI-enabled harassment and violence against women and girls.Mingeirou, K., Osman, Y. & Rafin, R. (2026, February). The Impact of Artificial Intelligence on Violence Against Women and Girls. Retrieved 8 April 2026, from Stimson Center website: https://www.stimson.org/2026/the-impact-of-artificial-intelligence-on-violence-against-women-and-girls/#elementor-toc__heading-anchor-0
Center gender equity and intersectional analysis in AI regulation and research and promote gender diversity in the AI workforce
Regulatory frameworks should explicitly address how AI systems disproportionately harm women workers, particularly women of color and immigrant women, and disabled workers. Policymakers should promote accessible AI literacy training for women workers while also addressing workers’ legitimate concerns. They should also support initiatives to recruit, retain and advance women – particularly women of color – in AI development, engineering and leadership, to ensure diverse perspectives shape how these technologies are designed and deployed.
Center worker voice in AI governance
Policymakers should strengthen collective bargaining rights, require meaningful worker consultation in AI deployment decisions and support worker organizing in AI-transformed sectors.
Strengthen social insurance for labor market transitions
This includes robust unemployment insurance and benefits that can support workers navigating AI-driven displacement and disruption. Particular attention should be paid to the likely disproportionate impacts on women and women of color in AI-vulnerable occupations.
Conclusion
The integration of AI into workplaces is not a predetermined process with inevitable outcomes. Policy choices will shape whether AI transitions exacerbate or reduce gender inequalities at work. As AI capabilities evolve, ongoing research and policy adaptation will be essential. Women workers must be centered in both research and policy development. Their experiences, needs, and insights should drive our collective response to AI’s transformation of work.
Methods note
This brief calculates the race, ethnicity and gender makeup of the top 15 AI-vulnerable occupations identified by Manning & Aguirre (2026), using their occupation-level workforce AI-adaptability index and the estimates of AI exposure from Eloundou et al. (2024). Our demographic estimates analyze 2019-2023 American Community Survey 5-Year Estimates Microdata via IPUMS USA, University of Minnesota, www.ipums.org. The sample includes all respondents ages sixteen and older who worked in the last twelve months. We use this larger universe to capture both those currently employed in a given occupation and those who recently worked in an occupation but are not currently in the labor force. This allows us to include respondents who may have exited the labor force due to AI-related disruptions or broader labor market volatility. Race categories exclude those who identify as Hispanic; Latinas may be of any race.
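The overrepresentation figures reported above reduce to a simple ratio: a group’s weighted share of workers in an occupation divided by its weighted share of the overall workforce, where a ratio near 2.0 means the group is roughly twice as prevalent in that occupation as in the workforce as a whole. The sketch below illustrates that calculation on hypothetical microdata; the record fields, weights and group labels are invented for illustration and are not drawn from the ACS.

```python
from collections import defaultdict

def group_shares(records, key, weight="weight"):
    """Weighted share of each demographic group among the given records."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r[weight]
    grand = sum(totals.values())
    return {group: w / grand for group, w in totals.items()}

def representation_ratio(records, occupation, group, key="group"):
    """Group's share within one occupation divided by its share overall."""
    in_occ = [r for r in records if r["occ"] == occupation]
    return group_shares(in_occ, key)[group] / group_shares(records, key)[group]

# Hypothetical microdata: each record is one survey respondent with a
# person weight, an occupation label and a demographic group label.
sample = [
    {"occ": "clerical", "group": "women", "weight": 2.0},
    {"occ": "clerical", "group": "women", "weight": 1.5},
    {"occ": "clerical", "group": "men",   "weight": 0.5},
    {"occ": "other",    "group": "women", "weight": 1.0},
    {"occ": "other",    "group": "men",   "weight": 3.0},
]

# Women are 4.5/8.0 = 56.25% of this toy workforce but 3.5/4.0 = 87.5% of
# clerical workers, for a representation ratio of 0.875 / 0.5625 ≈ 1.56.
ratio = representation_ratio(sample, "clerical", "women")
```

In the actual analysis, the same ratio is computed with ACS person weights over the full sample described above, once per group and occupation.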
The authors are grateful to Mary Akinrogbe, Mary Beech, Jocelyn Frye, Sharita Gruberg, Mettabel Law, Jessica Mason, Brittany Williams and Gail Zuagar for their thoughtful comments and support on this brief.

