A few weeks ago, New York City proposed rules for implementing its Automated Employment Decision Tool (AEDT) Law, which will regulate the use of AI in screening job candidates and making hiring decisions. Slated to go into effect early next year, the AEDT Law would require all NYC-based companies and employment agencies to ensure that AI solutions used for recruitment purposes are free from bias and discrimination.
The law is a long-awaited acknowledgement of growing concerns around algorithmic bias—bias that may consciously or unconsciously be built into AI models, causing AI applications to make decisions that are biased or discriminatory against individuals.
As an example, consider how a prominent technology company was forced to scrap its internal AI recruitment tool after the AI unfairly downrated job applications by female candidates.
As AI applications become more commonplace and wield more influence over our day-to-day lives, more and more instances of algorithmic bias are beginning to surface. From healthcare algorithms discriminating against patients of color to AI tools recommending harsher penalties for minority defendants in the criminal justice system, algorithmic bias can drive unequal outcomes and impact individual lives in irreversible ways.
A highly regarded voice on ethical AI and one of the most prominent researchers of algorithmic bias, Timnit Gebru, has published groundbreaking research and spoken to packed audiences around the world. Until late 2020, Timnit was the co-leader of the Ethical AI team at one of the world’s largest companies before she was famously fired over a paper that highlighted bias in AI, according to the New York Times.
Since then, Timnit—the cofounder of Black in AI—has launched the Distributed AI Research Institute (DAIR), which provides a space for independent and community-rooted AI research. More recently, she was named to TIME’s list of the 100 Most Influential People of 2022.
We are thrilled to announce that Dr. Timnit Gebru will be joining us at Relativity Fest in Chicago this month. She’ll offer our closing keynote on bias in AI based on her seminal work in this crucially important yet overlooked arena of AI research.
Working to Bring Ethics to AI
The founder and executive director of the DAIR Institute, Timnit received her PhD from Stanford University and did a postdoc at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications of projects that aim to gain insights from data.
A profile in TIME earlier this year outlined Timnit’s career path, which has both stirred controversy and shone a light on the complex realities of artificial intelligence.
TIME writer Billy Perrigo highlighted how Timnit got her start on this path:
By the time she left Stanford, Gebru knew she wanted to use her new expertise to bring ethics into this field, which was dominated by white men. She says she was influenced by a 2016 ProPublica investigation into predictive policing, which detailed how courtrooms across the U.S. were adopting software that offered to predict the likelihood of defendants reoffending in the future, to advise judges during sentencing. By looking at actual reoffending rates and comparing them with the software’s predictions, ProPublica found that the AI was not only often wrong, but also dangerously biased: it was more likely to rate Black defendants who did not reoffend as “high risk,” and to rate white defendants who went on to reoffend as “low risk.” The results showed that when an AI system is trained on historical data that reflects inequalities—as most data from the real world does—the system will project those inequalities into the future.
AI may be poised to reshape entire industries and unlock efficiencies and productivity at an unprecedented scale, but, as this example shows, it could have catastrophic consequences for under-represented groups in the absence of close human oversight. Indeed, unless the right controls and ethical frameworks are applied in building and deploying AI models, AI will perpetuate and amplify the same biases it learns from historical data.
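The kind of audit ProPublica performed—comparing a tool's predictions against actual outcomes, broken down by group—can be sketched in a few lines. The example below is purely illustrative: the data, group labels, and field names are hypothetical, not ProPublica's actual dataset or code. It computes the false-positive rate per group, i.e., the share of people who did not reoffend but were still flagged as "high risk."

```python
def false_positive_rate(records):
    """Among people who did NOT reoffend, what fraction did the tool
    wrongly flag as high risk?"""
    no_reoffend = [r for r in records if not r["reoffended"]]
    if not no_reoffend:
        return 0.0
    flagged = sum(1 for r in no_reoffend if r["high_risk"])
    return flagged / len(no_reoffend)

# Hypothetical audit data: each record pairs the tool's prediction
# with the actual outcome and a demographic group label.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]

# Disaggregate by group and compare error rates.
by_group = {}
for r in records:
    by_group.setdefault(r["group"], []).append(r)

rates = {g: false_positive_rate(rs) for g, rs in by_group.items()}
# In this toy data, group A's false-positive rate (2/3) is double
# group B's (1/3) — the disparity pattern ProPublica reported.
```

A tool can look accurate overall while its errors fall unevenly across groups, which is why audits disaggregate error rates rather than reporting a single accuracy number.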
This is partly why the DAIR Institute exists: to dig into these failings and promote the idea that AI’s future can be brighter when it’s developed with appropriately diverse and deliberate perspectives.
The TIME article continues:
Gebru sees her research institute DAIR as another organ within this wider push toward tech that is socially responsible, putting the needs of communities ahead of the profit incentive and everything that comes with it. At DAIR, Gebru will work with researchers around the world across multiple disciplines to examine the outcomes of AI technology, with a particular focus on the African continent and the African diaspora in the U.S. One of DAIR’s first projects will use AI to analyze satellite imagery of townships in South Africa, to better understand legacies of apartheid. DAIR is also working on building an industry-wide standard that could help mitigate bias in data sets, by making it common practice for researchers to write accompanying documentation about how they gathered their data, what its limitations are and how it should (or should not) be used.
Learn More at Relativity Fest
Timnit’s session, titled “Ethical AI: A Masterclass by Dr Timnit Gebru,” will be the closing keynote of the conference; it will take place on October 28 at 1:45 p.m. Central time at the Hyatt Regency in Chicago. You can learn more about it and register for Relativity Fest 2022—to attend either in person or remotely—here.
In the course of her talk, Timnit will deconstruct the issue of algorithmic bias, examine its detrimental effects on societies and individuals, and articulate the collective moral responsibility to pave the road for ethical AI. In the meantime, you can find Timnit on Twitter @timnitGebru.