
Silicon Valley is facing a grim new AI metric


This article is part of our special section on the DealBook Summit that brought together business and policy leaders from around the world.


Dario Amodei, the CEO of the AI company Anthropic, puts his between 10 and 25 percent. Lina Khan, the chair of the Federal Trade Commission, recently told me she’s at 15 percent. And Emmett Shear, who was OpenAI’s interim CEO for about five minutes last month, has said that his floats somewhere between 5 and 50 percent.

I’m of course talking about p(doom), the sickening new statistic sweeping Silicon Valley.

P(doom) – which is math shorthand for “probability of doom” – is the way some artificial intelligence researchers talk about how likely they think it is that AI will kill us all, or create some other catastrophe that threatens humanity’s continued existence. A high p(doom) means you think an AI apocalypse is likely, while a low p(doom) means you think we’ll probably survive.

Once an inside joke among AI nerds on online message boards, p(doom) has gone mainstream in recent months, as the AI boom sparked by ChatGPT last year has fueled widespread fears about how quickly AI is improving.

It’s become a common icebreaker among techies in San Francisco — and an inescapable part of AI culture. I’ve been to two tech events this year where a stranger asked for my p(doom) as casually as if they were asking for directions to the bathroom. “It comes up in almost every dinner conversation,” Aaron Levie, the CEO of the cloud data platform Box, told me.

P(doom) even played a small role in last month’s OpenAI drama. After Mr. Shear was appointed interim leader of OpenAI, employees began circulating a clip of a recent podcast in which he had said that his p(doom) could be as high as 50 percent. Some employees feared he was a “doomer” who would try to slow or limit their work because he considered it too risky, according to a person who witnessed the discussions. (Ultimately, OpenAI’s ousted CEO, Sam Altman, returned, so it didn’t matter.)

Sci-fi fans, of course, have theorized about robot takeovers for years. But after the release of ChatGPT last year, the threat started to seem more real. After all, if AI models could win art prizes and pass the bar exam, how far away could disaster be?

AI insiders raised alarms, too. Geoffrey Hinton, the prominent AI researcher who left Google this year and began warning about AI risks, estimated that if AI were not tightly regulated, there was a 10 percent chance it would lead to human extinction within the next 30 years. Yoshua Bengio, who along with Mr. Hinton is considered one of the “godfathers of deep learning,” told an interviewer that he thought an AI catastrophe was roughly 20 percent likely.

Of course, no one knows whether it is 10 percent, 20 percent, or 85.2 percent likely that AI will kill us. And there are a lot of obvious follow-up questions, like: Would it still count as “doom” if only 50 percent of people died as a result of AI? What if no one died, but we all ended up unemployed and miserable? And how would AI actually take over the world?

But the point of p(doom) isn’t precision. It’s meant to give a rough sense of where you stand on the spectrum from utopia to dystopia, and to convey, in vaguely empirical terms, that you’ve been thinking seriously about AI and its potential impact.

The term p(doom) seems to have originated more than a decade ago on LessWrong, an online message board devoted to the rationalist philosophy movement.

LessWrong’s founder, a self-taught AI researcher named Eliezer Yudkowsky, was an early proponent of the idea that a malicious AI could take over, and he wrote about several AI disaster scenarios he envisioned. (Back then, AI could barely set a kitchen timer, so the risk seemed pretty small.)

Mr. Yudkowsky, who has since become one of the AI world’s most famous doomers, told me he didn’t coin the term p(doom), although he helped popularize it. (He also said that if current AI trends continue, his p(doom) is “yes.”) The term was later adopted by members of the effective altruism movement, who use logical reasoning to arrive at ideas about moral goodness.

My best guess is that the term was coined by Tim Tyler, a Boston-based programmer who began using it on LessWrong in 2009. In an email exchange, Mr. Tyler said he had used it to “refer to the likelihood of doom, without being overly specific about the timescale or definition of ‘doom.’”

For some people, sharing their p(doom) is just idle talk. But it has also become an important social signal in the debate raging in Silicon Valley between people who think AI is moving too fast and those who think it should move even faster.

Mr. Levie, Box’s CEO, belongs to the more optimistic camp. He says his p(doom) is very low — not zero, but “about as low as it can be” — and he is betting that we will rein in the big risks of AI and avoid the worst possible outcomes. His concern is not that AI will kill us all, but that regulators and lawmakers will use dire predictions of doom as a reason to crack down on a promising young sector.

“The overshoot is likely to happen if critical policy decisions are made far too early in AI development,” he said.

Another problem with p(doom) is that it is not clear what counts as good or bad odds when the stakes are existential. Are you really an AI optimist if, for example, you predict that there is a 15 percent chance that AI will kill every human on earth? (Put another way, if you thought there was “only” a 15 percent chance that the next plane you boarded would crash and kill everyone on board, would you get on the plane?)

Ajeya Cotra, a senior researcher at Open Philanthropy who studies AI risk, has spent a lot of time thinking about p(doom). She thinks it has potential as shorthand (her p(doom) is between 20 and 30 percent, for the record), but she also sees its limits. For starters, p(doom) doesn’t account for the fact that the likelihood of AI-related harm depends largely on how we choose to control the technology.

“I know some people with a p(doom) over 90 percent, and it’s so high partly because they think companies and governments won’t adopt good safety practices and policies,” she told me. “I know others with a p(doom) of less than 5 percent, and it’s so low partly because they expect scientists and policymakers to work hard to prevent catastrophic harm before it happens.”

In other words, you could think of p(doom) as a kind of Rorschach test – a statistic that’s supposed to be about AI but ends up revealing more about how we feel about people, and about our ability to harness powerful new technology while managing its risks.

So, what’s yours?
