Joy Buolamwini’s ‘Unmasking AI’ unearths the bias in AI technologies and aims to hold Big Tech to account

News Room

Joy Buolamwini's debut book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines," is a triumph: a literary work about artificial intelligence that isn't boring, highfalutin, or dry.

Buolamwini’s autobiographical work offers an origin story of AI technology as well as an account of the author’s path. Its striking accessibility mirrors the meteoric rise of AI into the mainstream. The Massachusetts Institute of Technology researcher flexes her prose and occasionally documents her journey in an almost scientific telling of her early life in rural Mississippi with Ghanaian Canadian parents who encouraged her intellectual curiosity. Buolamwini would later go on to advise Congress and President Joe Biden on ethical and safe AI policy.

"I wanted to write this book now to invite more people into the conversation so it's not just that you have to be in the tech industry. You certainly don't need a Ph.D. from MIT to be part of the conversation," Buolamwini told Business Insider. "Because these technologies are shaping everybody's life in many important ways, I wanted a book that was an invitation to say, 'You're part of the conversation too, here is the context, but most importantly, here's what it means to you.'"

Buolamwini’s curiosities were nurtured first at Georgia Tech, and she later journeyed through the gilded halls of MIT, where she earned her Ph.D. Her experiences as a student defined her discovery of what she calls the coded gaze: “the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure.”

Many popular technologies are built with and perpetuate this exclusionary gaze, leading to detrimental consequences. In particular, Buolamwini documents the coded gaze as it applies to facial-recognition AI, the subject of much of her research.

At varying places in the book, I relived my excitement and eventual frustration as a young tech reporter covering the harm that facial recognition has wreaked on communities that looked like mine and were uncomfortably close to home.

Buolamwini also tells the story of a Brooklyn apartment complex.

There, 90% of tenants were people of color, mostly women and older adults — all groups that facial recognition has been scientifically proven to be less accurate on. When the building owner sought to use facial recognition as a means for tenants to access their homes, it was met with strong opposition, and Buolamwini wrote an amicus letter of support for their cause. The tenants eventually succeeded in fighting this encroachment on their rights.

In the book’s more personal and vulnerable anecdotes, we see the nexus of Buolamwini’s lived experience, expertise, and activism.

From early in her childhood, when she was misgendered and when other kids breathed a sigh of relief for not having skin as dark as hers, to a guard reaching for a gun at the sight of her at Davos, to the choice of a news network to feature a white male AI expert over her for a television segment, readers can feel the misogynoir that has cast a shadow on Buolamwini’s life and achievements.

“When a guard reached for a gun as a friend tried to drop me off at a hotel for designated badge holders, I was reminded that my inclusion in certain spaces was the exception to overall exclusion,” Buolamwini writes of her experience at the World Economic Forum.

Yet, through the exceptional quality of Buolamwini’s life and research, her story becomes all the more human.

At one point in the book, I was moved to tears by the story of Buolamwini writing a letter to the Olympic gold medalist Simone Biles after she withdrew from events at the 2020 Olympic Games, citing safety.

In the letter, Buolamwini marveled at Biles' ability to "put herself above the weight of gold." Her decision may have been an even harder feat for a Black woman, who is often praised for her ability to endure rather than to care for herself.

“When you’re one of one or one of few, there’s this extra pressure we put on ourselves because of the level of scrutiny we know people in our position get,” Buolamwini told BI.

“If you’re already internally a perfectionist and externally you already know there will be a micro comb through your actions, there is another type of self-monitoring and self-presentation that’s adding to the wear and tear of the work itself,” she added. “I realized that was an unfair burden I was putting on myself in reaction to a world that would not generally position me as the expert.”

After finishing the book, readers come away with a better understanding of AI. The book details how certain companies responded to Buolamwini's findings: IBM took a proactive approach, working with her to fix its facial-recognition tool after it was shown to be markedly less accurate on Black women, while Amazon pushed back against her research, which was later replicated and confirmed by the National Institute of Standards and Technology.

Buolamwini shared her intention as she reflected on the process of writing the book and on its title.

"The title, 'Protecting what is human in a world of machines,' part of it is our expression, our process of creativity; not just our process of the book, but what I learned in doing that," Buolamwini said. "Protecting our essence; protecting what's human is protecting our essence. Let's be still and recognize what makes us human."