‘Coded Bias’ is the most important AI film you can watch today

Even before its release, Coded Bias was positioned to become a must-see for anyone interested in the debate over the ethics of AI. The documentary, which premiered on Netflix this week, is the kind of film that can and should be shown in countless high school classrooms, where students themselves are subjected to various AI systems in the post-pandemic era of Zoom. It’s a refreshing and digestible introduction to the myriad ways algorithmic bias has infiltrated every aspect of our lives – from racist facial recognition and predictive policing systems to scoring software that decides who gets access to housing, loans, public assistance, and more.

But in the wake of the recent high-profile firing of Timnit Gebru and other members of Google’s AI ethics team, the documentary feels like only part of a deeper, ongoing story. If we understand algorithmic bias as a form of computationally imposed ideology rather than an unfortunate rounding error, we cannot simply attack the symptoms. We need to question the very existence of the racist and capitalist institutions that created these systems in the first place.

The film follows Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, an organization she created after realizing that facial recognition systems weren’t trained to recognize darker-skinned faces. Buolamwini is easily one of the most important figures in the AI field, and she serves as a gateway to a series of stories about how automation has forced a robotic and unjust world upon us – one that does little more than reflect and amplify the pre-existing injustices wrought by racism, sexism, and capitalism.

Showing the real human impact of algorithmic surveillance is always a challenge, but filmmaker Shalini Kantayya manages to weave together a series of deeply compelling portraits: a celebrated teacher who was fired after receiving a low score from an algorithmic evaluation tool, and a group of tenants in Brooklyn who campaigned against their landlord after he installed a facial recognition system in their apartment building, to name a few.

Perhaps the movie’s greatest feat is tying all these stories together to highlight a systemic problem: it’s not just that the algorithms “don’t work,” it’s that they were built by the same cadre of predominantly male, predominantly white engineers who took the oppressive models of the past and deployed them at scale. As author and mathematician Cathy O’Neil points out in the film, we can’t understand algorithms – or technology in general – without understanding the asymmetric power structure between those who write code and those who have code imposed upon them.

In discussions of AI, there is a tendency to view algorithmic bias as an innocent glitch that can simply be patched. In reality, it is often people in positions of power imposing old and bad ideas like racist pseudoscience, using computers and math as a smokescreen to avoid accountability. After all, if the computer says so, it must be true.

Given the systemic nature of the problem, the film’s ending feels disappointing. We see Buolamwini and others speaking at a pre-pandemic congressional hearing on AI and algorithms, bringing the issue of algorithmic bias to the highest seats of power. But given Congress’s long and ineffective history of tsk-tsking tech CEOs like Mark Zuckerberg, I wondered how a hearing translates into justice – especially when injustice seems to be ingrained in the business models of the tech companies that shape our algorithmic future.

Even more interesting is how the film’s timeline ends just before the firing (and subsequent smearing) of Timnit Gebru and other prominent AI ethics researchers at Google. Gebru, a renowned data scientist who appears in the film, was fired last year after co-authoring a paper which concluded that the large language models used in many AI systems have a significant environmental impact and pose “a risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest.”

In other words, the findings were a rebuttal of Google’s core business model, and company management wasn’t too interested in hearing them. For many in the field of AI ethics, the firings demonstrated how racial capitalism works: women of color in the tech industry are tolerated to achieve the appearance of diversity, and eliminated when they question the white male power structure and its business model of endless surveillance.

If there is hope at the end of the film, it is in the brief mentions of grassroots activists who have successfully campaigned to ban facial recognition in cities across the country. But ultimately, the lesson we should take from films like Coded Bias is not about facial recognition, nor any particular algorithm or technology. It’s about how our society’s core operating system will continue to produce new and more harmful technologies – unless we dismantle it and create something better to put in its place.
