Image Credits: Mutale Nkonde
To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Mutale Nkonde is the founding CEO of AI for the People (AFP), a nonprofit organization that works to increase the number of Black voices in the technology industry. Before that, she helped introduce the Algorithmic Accountability Act and the Deepfake Accountability Act, as well as the No Biometric Barriers to Housing Act, in the U.S. House of Representatives. She is currently a visiting policy fellow at the Oxford Internet Institute.
Briefly, how did you get your start in AI? What attracted you to the field?
I became interested in how social media worked after a friend of mine posted in 2015 that Google Photos had labeled two Black people as gorillas. I was involved in a lot of "Blacks in tech" circles, and we were outraged, but I didn't begin to understand that this was due to algorithmic bias until the publication of "Weapons of Math Destruction" in 2016. That inspired me to start applying for fellowships where I could study this further, a path that ended with my co-authoring the 2019 report "Advancing Racial Literacy in Tech." The report was noticed by people at the MacArthur Foundation and kicked off the current leg of my career.
I was drawn to questions about racism and technology because they seemed understudied and counterintuitive. I like doing things other people don't, so learning more and spreading that knowledge within Silicon Valley seemed like a lot of fun. Since "Advancing Racial Literacy in Tech," I have started a nonprofit called AI for the People that focuses on advocating for policies and practices that reduce the expression of algorithmic bias.
What work (in the AI field) are you most proud of?
I'm really proud of being the lead advocate for the Algorithmic Accountability Act, which was first introduced in the House of Representatives in 2019. The act would require companies that design, deploy, and govern AI systems to comply with local anti-discrimination laws. This work has led to AI for the People being invited to the Schumer AI Insight Forums and joining advisory groups for various federal agencies, with exciting work ahead.
How do we overcome the challenges of a male-dominated tech industry and, by extension, a male-dominated AI industry?
In fact, I have had more problems with academic gatekeepers. Most of the men I work with at technology companies have been tasked with developing systems for use on Black and other non-white populations, so they have been very easy to work with, mostly because I act as an outside expert who can validate or challenge their existing practices.
What advice would you give to women looking to enter the AI field?
Find your niche and then become one of the best people in the world at it. Two things helped me build credibility. First, I was advocating for policies to reduce algorithmic bias while people in academia were just starting to discuss the issue. This gave me a first-mover advantage in the "solutions space" and made AI for the People an authority on the Hill five years before the executive order. The second thing I would say is: look at your shortcomings and address them. Four years into AI for the People, I am earning the academic credentials I need to make sure I'm not pushed out of my thought-leadership position. I can't wait to graduate with my master's degree from Columbia University in May, and I hope to keep doing research in this field.
What are the most pressing issues facing AI as it evolves?
I'm thinking hard about strategies we can pursue to involve more Black people and other people of color in building, testing, and annotating foundation models. Technology is only as good as its training data, but DEI is under attack, Black venture funds are being sued for backing Black and female founders, and Black academics are being attacked publicly. In that environment, how do we create inclusive datasets, and who in the industry will do the work?
What issues should AI users be aware of?
We should think about AI development as a geopolitical issue, and about how the United States could become a leader in truly scalable AI by developing products with high efficacy rates for people across every demographic group. China is the only other country producing AI at scale, but it builds products for a largely homogeneous population, despite its large footprint in Africa. With aggressive investment in developing anti-bias technologies, the American tech sector could dominate that market.
What is the best way to build AI responsibly?
A multi-pronged approach is needed, but one thing to consider is pursuing research questions that center people living on the margins of society. The easiest way to do this is to take note of cultural trends and consider how they should shape technological development. For example, how do we design scalable biometric technologies in a society where more and more people identify as transgender or nonbinary?
How can investors more effectively promote responsible AI?
Investors should look at demographic trends and ask themselves whether these companies will be able to sell to a population that is increasingly Black and brown because of falling birth rates in European populations around the world. This should prompt them to ask questions about algorithmic bias during due diligence, because bias will increasingly become an issue for consumers.
There is so much work to be done on reskilling the workforce for an era in which AI systems perform low-stakes, labor-saving tasks. How can we make sure that people living on the margins of society are included in these programs? How can they help inform how AI systems do and do not work for them? And how can we use those insights to make sure AI truly is for the people?