
AI now belongs to the capitalists


What happened at OpenAI over the past five days can be described in many ways: a juicy governance drama, a tug-of-war for control of one of America’s biggest startups, a clash between those who want AI to advance faster and those who want to slow it down.

But most importantly, it was a battle between two dueling visions of artificial intelligence.

In one view, AI is a transformative new tool, the latest in a series of world-changing innovations that includes the steam engine, electricity and the personal computer, and one that, if used properly, could usher in a new era of prosperity and generate vast sums of money for the companies that harness its potential.

In another view, AI is something closer to an alien life form – a leviathan summoned from the mathematical depths of neural networks – that must be restrained and deployed with extreme caution to prevent it from escaping our control and killing us all.

With the return of Sam Altman on Tuesday to OpenAI, the company whose board of directors fired him as CEO last Friday, the battle between these two views appears to be over.

Team capitalism has won. Team Leviathan lost.

OpenAI’s new board will, at least initially, consist of three people: Adam D’Angelo, Quora’s CEO (and the only holdover from the old board); Bret Taylor, a former executive at Facebook and Salesforce; and Lawrence H. Summers, the former Secretary of the Treasury. The board is expected to expand from there.

OpenAI’s largest investor, Microsoft, is also expected to have a greater voice in OpenAI’s governance in the future. This could include a board seat.

Three of the members who pushed for Mr. Altman’s ouster are gone from the board: Ilya Sutskever, OpenAI’s chief scientist (who has since recanted his decision); Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.

Mr. Sutskever, Ms. Toner and Ms. McCauley are representative of the kinds of people who were heavily involved in thinking about AI a decade ago — an eclectic mix of academics, Silicon Valley futurists and computer scientists. They viewed the technology with a mix of fear and awe, worrying about theoretical future events such as the “singularity,” a point at which AI would surpass our ability to control it. Many were affiliated with philosophical movements such as Effective Altruism, which uses data and rationality to make moral decisions, and were drawn to work on AI out of a desire to minimize the technology’s destructive effects.

This was the atmosphere around AI in 2015, when OpenAI was founded as a nonprofit, and it helps explain why the organization retained its complicated governance structure – giving the nonprofit’s board the ability to monitor the company’s activities and to replace its leadership – even after it started a for-profit arm in 2019. At the time, protecting AI from the forces of capitalism was seen by many in the industry as a top priority, one that needed to be codified in corporate bylaws and charter documents.

But a lot has changed since 2019. Powerful AI is no longer just a thought experiment; it exists in real products, like ChatGPT, used by millions of people every day. The world’s largest technology companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy AI within companies, hoping to reduce labor costs and increase productivity.

The new board members are the kind of business leaders you would expect to oversee such a project. Mr. Taylor, the new chairman, is a veteran Silicon Valley dealmaker who led the sale of Twitter to Elon Musk last year when he was chairman of Twitter’s board of directors. And Mr. Summers is the Ur-capitalist – a leading economist who has said he believes technological change is a “net good” for society.

There may still be voices of caution on the reconstituted OpenAI board, or figures from the AI safety movement. But they won’t have veto power, or the ability to effectively shut down the company at a moment’s notice, as the old board did. And their preferences will be balanced against those of others, such as the company’s executives and investors.

That’s a good thing if you’re Microsoft, or one of the thousands of other companies that rely on OpenAI’s technology. More traditional governance means less risk of a sudden blowup, or of a change that would force you to switch AI providers in a hurry.

And perhaps what happened at OpenAI – a triumph of corporate interests over concerns about the future – was inevitable, given the increasing importance of AI. A technology that could potentially usher in a Fourth Industrial Revolution was unlikely to be controlled in the long term by those who wanted to slow it down – not with so much money at stake.

There are still some traces of the old attitude in the AI industry. Anthropic, a rival company founded by a group of former OpenAI employees, has set itself up as a public benefit corporation, a legal structure intended to insulate the company from market pressure. And an active open-source AI movement has advocated for AI to remain free from corporate control.

But these are best seen as the last vestiges of the old AI era, in which the people who built the technology viewed it with both wonder and fear and sought to restrain its power through organizational governance.

Now the utopians are in the driver’s seat. Full speed ahead.
