Silicon Valley sits between two major San Francisco Bay Area mountain ranges. On the west are the Santa Cruz Mountains, named for Santa Cruz, Spanish for Holy Cross. On the east is the Diablo Range, named for Mt. Diablo, the Mountain of the Devil.
It is not lost on many that Silicon Valley sits between geographical good and evil and that the technology it creates can be used for both purposes.
Over decades of technology creation, many companies have struggled to balance their technology's positive, practical uses against its potential to be exploited by bad actors for nefarious ends.
One of the most significant good-versus-evil conundrums was exemplified by tech's role in creating the atomic bomb. Its use ended the war with Japan in 1945, but at the cost of close to 200,000 lives in Hiroshima and Nagasaki. Some years back, while in Hiroshima, I visited its memorial park and museum and saw pictures of the horrifying aftermath of the atomic bomb's impact on the city.
We are now at a similar crossroads of creating a new generation of technology that sits between good and evil. While AI has great promise for good, it also has great potential for evil.
This issue is debated at the highest levels by government officials, academics, companies building AI, and those that use AI in applications and services today.
This conflict was at the heart of the recent upheaval at OpenAI. The board's role was to keep the company's goals and ambitions within the borders of altruism. On the other side was the for-profit arm, which was charging ahead with less forethought about how AI could be used for evil.
Many articles have been written recently about the drama at OpenAI and the firing and rehiring of Sam Altman as CEO. What emerged over the past week was how weakly the board was structured and how it governed the company.
Now that Sam Altman has returned to OpenAI, the board has been reorganized, and we will likely see an executive from Microsoft join it eventually to protect its 49% ownership stake in the company. New board members include former Salesforce co-CEO Bret Taylor and Larry Summers, former Secretary of the Treasury and president of Harvard University.
Sam Altman and Microsoft will benefit most from this restructuring, and the for-profit division will become more central to the company. However, Altman and the board must not lose sight of the need to do everything possible to create new AI technology responsibly.
Two departing board members, tech entrepreneur Tasha McCauley and Helen Toner, director of strategy and foundational research grants at Georgetown's Center for Security and Emerging Technology, were altruistic watchdogs. Their departure leaves a hole in the ranks of the company's good-versus-evil champions.
It is too early to tell how this restructuring will ultimately affect OpenAI's future, although the company does appear to be on better footing. Yet it must not lose sight of the need for restraint, ensuring that altruistic guidelines factor into all of its product strategies.