In a surprising turn of events, OpenAI’s board abruptly fired co-founder and CEO Sam Altman on Friday. Following a backlash on social media, the board appeared to be reconsidering its decision over the weekend, only to confirm early Monday morning that Altman was out. Altman will now lead a new AI research lab at Microsoft.
Altman has become the public face of the AI movement, thanks to ChatGPT’s massive success. His removal means chaos in the short term for OpenAI and others in the industry.
The real story, however, may be the OpenAI board’s concerns about AI safety, concerns that stem from the outsized influence of effective altruism (EA) in Silicon Valley. AI safety likely created a key rift between the board and Altman.
EA is a philosophical framework rooted in utilitarianism, which aims to maximize the net good in the world. In theory, there’s little to dislike about EA, with its rationalist approach to philanthropy that emphasizes evidence over emotion. The problem is the movement’s leaders can be all too prone to ethical lapses, reflecting the very worst stereotypes of the movement’s critics.
For example, Sam Bankman-Fried, the disgraced FTX founder and devoted effective altruist, showed how EA’s “earn to give” philosophy—which promotes making a lot of money so that one can later give it away to charity—can easily turn into earning at any cost, even if it means defrauding millions of investors in the process. Similarly, EA-aligned philosopher Peter Singer recently defended human-animal sexual relations on the social media platform X, highlighting the movement’s creepy connections to the most perverse corners of intellectual libertinism.
While the current crop of EA leaders may include people with dubious integrity and even harmful intent, utilitarianism has a long and storied history that counts great philosophers like Jeremy Bentham and John Stuart Mill among its proponents. Utilitarianism sees the collective interest as superseding that of the individual—as in the classic thought experiment of sacrificing a single healthy person to harvest their organs and save five others.
The OpenAI board is composed of current and former effective altruists, and debates over AI safety likely contributed to Altman’s removal, highlighting the tension between the EA-friendly board and the business-savvy, mostly profit-seeking Altman. OpenAI awkwardly straddles sectors: technically a non-profit, but with responsibilities to earn profits for some investors, like Microsoft. After launching ChatGPT and partnering with Microsoft, the board may have concluded that OpenAI had strayed too far from the nonprofit’s original mission of open and safe AI.
But even early AI safety proponents like philosopher Nick Bostrom now avoid the extreme predictions that set off the doomsayers in the first place. Bostrom—who promotes “longtermism,” another key EA concept—apparently doesn’t want to associate himself with bloggers like Eliezer Yudkowsky, whose tweets predict the end of the world on a near-hourly basis.
Ultimately, the nonprofit, EA-influenced arm of OpenAI won out, but the company may well be destroyed in the process. Along with Altman, OpenAI president Greg Brockman and a number of top researchers have already fled the organization. The trickle may soon become a flood, as hundreds of OpenAI employees have signed a letter threatening to leave the company unless Altman is reinstated.
The whole episode demonstrates how nonprofits, which can be plagued by fickle directors, are often prone to straying from the public good, making rash decisions based on short-sighted impulses and bruised egos. Meanwhile, for-profits at least have a solid grounding in seeking to protect the investments of their shareholders. This focus on financial returns is like a compass that keeps for-profits guided toward their missions.
Effective altruism is a poor replacement as a lodestar guiding the nonprofit sector. The good aspects of EA, like its emphasis on evidence-based solutions, are not novel, and indeed there are plenty of alternatives that are more attractive in this regard. The bad aspects of EA, meanwhile, can appear irredeemable.
EA leaders have demonstrated that they are willing to defraud investors, push the boundaries of civilized behavior, and wreck some of society’s most innovative companies, all if it conforms with whatever myopic vision of the good happens to occupy the leaders’ heads at a particular moment.
Far from being a long-term worldview, EA is a short-sighted one. OpenAI’s board is not the worst set of EA practitioners. Nevertheless, this past weekend’s events capture how the movement tends to elevate people with serious blind spots to positions of prominence and influence.
Too many effective altruists are willing to resort to depravity if they believe it will do good over the long term. But what kind of precedent does this set? Why should we expect future effective altruists won’t sink to similar depths of harm, if all one needs is to concoct some self-serving justification?
The problem with utilitarianism more broadly as a philosophy is that it is incomplete. Doing the most overall good provides significant guidance, but it can’t be the whole story. Sacrificing oneself for the long-run interests of a society cannot be the only principle upon which that society is built. Not only is this a recipe for misery, it is contrary to basic human nature. Self-interest, for better or worse, must also at some point enter the expected value calculation.
While OpenAI currently leads the race in AI, expect new leaders to emerge given the company’s internal turmoil. But the biggest stand should be against EA. However reasonable some aspects of this philosophy may be, the charlatans attracted to the movement should raise serious doubts about its moral authenticity. Too many of the tech industry’s worst leaders promote EA, revealing a rot that can eat away at one of America’s most innovative sectors.
With Altman’s ouster, it’s clear that EA’s corrupting influence has infected even admired companies like OpenAI. If OpenAI represents Silicon Valley’s moral compass, it appears we are all in for some rough times ahead.