“I want to give a call to action to people who believe diversity is important. Because it is an emergency, and we have to do something about it now.” — Timnit Gebru
When I first learned about Timnit Gebru’s work, I was struck with awe. I could feel the urgency underpinning her ideas and her commitment, as an AI Ethics researcher, to increasing diversity in the field of AI. Her message is clear: it’s imperative that AI practitioners take meaningful action to center and elevate the work of Black researchers in AI. And this needs to happen now.
In addition to her work as a prominent AI Ethics researcher, Timnit Gebru is also the cofounder of Black in AI, an international initiative that mentors and supports Black students and researchers in AI through workshops, events, collaboratives and more.
Timnit Gebru’s research and advocacy directly address the dangers of AI when it lacks a critical analysis of power imbalance and when ethics and fairness are not an essential piece of the equation.
Timnit Gebru’s powerful voice, call to action and work have inspired me for so many reasons. As a Latina student in STEM, Timnit’s work has inspired me to dream that I can engage with data and computing in ways that maintain strong ties to issues of diversity and power. Her work is evidence that intersectional approaches must be valued in the development of new technologies. The vulnerability and conviction that I hear in her voice have also inspired me to work on making my own messages clear and assertive.
In today’s blog, I want to share some of Timnit Gebru’s research and its impact, which have inspired me as a student. For an in-depth review of Timnit’s work, please take a look at Rachel Thomas’ article over at The Gradient.
Bias in Facial Recognition Software
In 2018, Timnit Gebru coauthored a prominent paper with Joy Buolamwini showing that facial recognition software perpetuates racial and gender bias. The study has been cited around the world and has supported the work of social good advocates across various disciplines. It has been particularly impactful in highlighting the errors and ethical risks of deploying these technologies in the criminal justice system, where they disproportionately harm Black communities.
In education, Timnit and Joy’s contributions have significantly shaped the remote test-taking experiences of students of color. Their findings have led schools (including mine) to reconsider and revoke their use of Proctorio and other test-taking software. These changes inside schools have been pushed for by students and educators alike, whose advocacy is partly grounded in concerns about the effectiveness of, and racial bias in, test-taking software that uses facial recognition technology.
These advocacy efforts mean that students can focus a little better on taking their tests, less worried about issues of privacy and bias perpetuated by a remote, artificial test proctor.
Timnit and Joy’s Gender Shades paper and initiative are powerful in their centering of an intersectional and critical analysis of AI. Their work has not only changed the world in fundamental ways but also paved the way for critical analyses of data and technologies that perpetuate bias.
Accountability and Transparency in Data
Since 2018, Timnit Gebru and her coauthors have widely circulated their paper titled “Datasheets for Datasets.” The paper puts forth a set of questions that help researchers document important details about the datasets they create. The initiative encourages “…the machine learning community to prioritize transparency and accountability.” The questions ask researchers to document their motivations for creating a dataset, their data collection processes, their plans for distributing the dataset, and more. Some example questions:
- For what purpose was the dataset created?
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
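To make the idea concrete, here is a minimal sketch of how a datasheet’s answers might be captured in code and shipped alongside a dataset. The field names below are my own hypothetical simplification of the paper’s question categories; the paper itself prescribes free-form prose answers, not a fixed schema, so treat this only as an illustration of the documentation habit.

```python
from dataclasses import dataclass, asdict

@dataclass
class Datasheet:
    """An illustrative (not official) datasheet for a dataset.

    Each field paraphrases one of the question areas from
    "Datasheets for Datasets"; answers are free-form prose.
    """
    purpose: str             # For what purpose was the dataset created?
    creators: str            # Who created it, and on behalf of which entity?
    collection_process: str  # How was the data collected?
    distribution_plan: str   # How will the dataset be distributed?

# Hypothetical example: documenting an image dataset
sheet = Datasheet(
    purpose="Benchmark facial-analysis accuracy across demographic groups",
    creators="A university research lab, on behalf of its institution",
    collection_process="Images gathered from publicly available websites",
    distribution_plan="Released publicly alongside the accompanying paper",
)

# Serialize to a plain dict so it can be saved (e.g., as JSON) with the data
print(asdict(sheet)["purpose"])
```

Storing the answers in a structured object like this makes it natural to version the datasheet together with the dataset itself, so the documentation travels with the data.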
Datasheets for Datasets breaks away from Western conceptualizations and implementations of positivist research approaches by giving visibility to the direct link between datasets and their creators.
The practical and theoretical implications of Datasheets for Datasets are far-reaching. As a student, it has inspired me to build a research practice that examines my motivations and biases and gives visibility to my standpoint as I continue to engage in data work.