Image Credits: Miranda Bogen
To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. As the AI boom continues, we'll publish several pieces throughout the year spotlighting key work that often goes unrecognized. Click here for more profiles.
Miranda Bogen is the founding director of the AI Governance Lab at the Center for Democracy & Technology, where she works to create solutions that can effectively regulate and govern AI systems. She previously helped lead responsible AI strategy at Meta and worked as a senior policy analyst at Upturn, an organization that promotes equity and justice in the design and use of technology.
Briefly, how did you get your start in AI? What attracted you to the field?
I was drawn to working on machine learning and AI because I saw how these technologies collide with society, forcing fundamental conversations about values, rights, and which communities get left behind. My early research exploring the intersection of AI and civil rights made me acutely aware that AI systems are much more than technological artifacts: they are systems that both shape and are shaped by their interactions with people, bureaucracies, and policies. I've always been good at translating between technical and non-technical contexts, and I was energized by the opportunity to help break through the façade of technical complexity so that communities with different kinds of expertise could help shape how AI is built from the ground up.
What work (in the AI field) are you most proud of?
When I first started working in this field, many people still needed to be convinced that AI systems could have discriminatory effects on marginalized populations, let alone that anything needed to be done about those harms. While a significant gap remains between the current state of affairs and a future in which bias and other harms are systematically addressed, I'm proud that the research my collaborators and I conducted on discrimination in personalized online advertising, and my work on algorithmic fairness in industry, led to meaningful changes to Meta's ad delivery system and progress toward reducing disparities in access to important economic opportunities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have been fortunate to work with amazing colleagues and teams who have been generous with opportunities and genuine in their support, and I've tried to bring that energy into every room I'm in. In my most recent career transition, I was delighted that nearly all of my options involved working on teams and within organizations led by phenomenal women, and I hope the field continues to elevate the voices of those who have not traditionally been at the center of technology-oriented conversations.
What advice would you give to women looking to enter the AI field?
It's the same advice I give to anyone who asks: find supportive managers, advisors, and teams that energize and inspire you, that value your voice and perspective, and that stand up for you and your work.
What are the most pressing issues facing AI as it evolves?
The impacts and harms that AI systems are having on people are well documented by now, and one of the most pressing challenges is to move beyond describing those problems toward developing robust approaches for systematically addressing them, and then accelerating the adoption of those approaches. We launched the AI Governance Lab at CDT to drive progress in both directions.
What issues should AI users be aware of?
In most cases, AI systems do not yet have seat belts, airbags, or traffic signs, so proceed with caution before using them for critical tasks.
What is the best way to build AI responsibly?
The best way to build AI responsibly is with humility. Consider how success is defined for the AI system you're working on, who that definition serves, and what context may be missing. Think about who the system could fail for and what would happen if it did. And build systems with input not only from the people who will use them, but from the communities they serve.
How can investors more effectively promote responsible AI?
Investors need to create room for technology developers to tread more carefully before rushing half-baked technologies to market. Intense competitive pressure to release the latest, biggest, and brightest new AI models has raised concerns about a lack of investment in responsible practices. While unrestrained innovation sings a seductive siren song, it is a mirage that leaves everyone worse off.
AI is not magic. It’s just a mirror held up to society. If you want it to reflect something different, you have work to do.