Silicon Valley confronts the idea that the “singularity” is here

For decades, Silicon Valley has anticipated the moment when a new technology would arrive and change everything. It would unite man and machine, probably for the better but possibly for the worse, and split history into before and after.

The name for this milestone: the Singularity.

It could happen in different ways. One possibility is that humans would add the processing power of computers to their own innate intelligence, becoming supercharged versions of themselves. Or perhaps computers would grow so complex that they could truly think, creating a global brain.

In either case, the resulting changes would be drastic, exponential and irreversible. A self-aware, superhuman machine could design its own improvements faster than any group of scientists, setting off an explosion of intelligence. Centuries of progress could be compressed into years or even months. The Singularity is a catapult into the future.

Artificial intelligence is dominating technology, business and politics like nothing in recent memory. Listen to the extravagant claims and wild assertions coming out of Silicon Valley, and it seems the long-promised virtual paradise is finally at hand.

Sundar Pichai, Google’s usually low-key CEO, calls artificial intelligence “more profound than fire or electricity or anything we’ve done in the past.” Reid Hoffman, a billionaire investor, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Microsoft co-founder Bill Gates declares that AI “will change the way people work, learn, travel, get health care and communicate with each other.”

AI is the ultimate new product rollout in Silicon Valley: transcendence on demand.

But there is a dark twist. It’s as if the tech companies had introduced self-driving cars with the caveat that they could blow up before you got to Walmart.

“The advent of artificial general intelligence is called the singularity because it is so hard to predict what will happen next,” Elon Musk, who runs Twitter and Tesla, told CNBC last month. He said he thought “an age of abundance” would result but that there was “some chance” of it “destroying humanity.”

The biggest cheerleader for AI in the tech community is Sam Altman, CEO of OpenAI, the start-up that sparked the current frenzy with its ChatGPT chatbot. He says AI will be “the greatest force for economic empowerment and a lot of people getting rich we have ever seen.”

But he also says that Mr. Musk, an AI critic who has started a company to develop brain-computer interfaces, may be right.

Apocalypse is familiar, even favored territory for Silicon Valley. A few years ago, it seemed as if every tech executive had a well-stocked apocalypse bunker somewhere remote but accessible. In 2016, Mr. Altman said he was stockpiling “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.” For a while, the coronavirus pandemic made tech preppers feel vindicated.

Now they are preparing for the Singularity.

“They like to think they are sensible people making wise comments, but they sound more like monks in the year 1000 talking about the Rapture,” said Baldur Bjarnason, author of “The Intelligence Illusion,” a critical examination of AI. “It’s a little scary,” he said.

The Singularity’s intellectual roots go back to John von Neumann, a pioneering computer scientist who spoke in the 1950s of how “the accelerating advancement of technology” would lead to “an essential singularity in the history of the race”.

Irving John Good, a British mathematician who helped decipher the German Enigma machine at Bletchley Park during World War II, was another influential proponent. “Man’s survival depends on the early construction of an ultra-intelligent machine,” he wrote in 1964. The director Stanley Kubrick consulted Mr. Good about HAL, the computer that turns murderous in “2001: A Space Odyssey” – an early example of the porous boundary between computer science and science fiction.

Hans Moravec, an adjunct professor at Carnegie Mellon University’s Robotics Institute, thought AI wouldn’t just be a boon to the living: the dead would also be resurrected in the Singularity. “We would have a chance to recreate the past and interact with it in a real and direct way,” he wrote in “Mind Children: The Future of Robot and Human Intelligence.”

In recent years, the entrepreneur and inventor Ray Kurzweil has been the Singularity’s leading champion. Mr. Kurzweil wrote “The Age of Intelligent Machines” in 1990 and “The Singularity Is Near” in 2005, and is now writing “The Singularity Is Nearer.”

By the end of the decade, he expects computers to pass the Turing test and become indistinguishable from humans. Fifteen years later, he calculates, will come the true transcendence: the moment when “computing will be part of ourselves and we will increase our intelligence a millionfold.”

By then, Mr. Kurzweil will be 97. With the help of vitamins and supplements, he plans to be there when it happens.

To some critics, the Singularity is an intellectually dubious attempt to re-create the belief system of organized religion in the realm of software.

“They all want eternal life without the inconvenience of having to believe in God,” said Rodney Brooks, former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.

The innovation fueling the current Singularity debate is the large language model, the type of AI system that powers chatbots. Strike up a conversation with one of these LLMs and it can spit back answers quickly, coherently and often with surprising fluency.

“When you ask a question, these models interpret what it means, determine what the answer is supposed to mean, and then translate that back into words — if that isn’t a definition of general intelligence, what is?” said Jerry Kaplan, a veteran AI entrepreneur and the author of “Artificial Intelligence: What Everyone Needs to Know.”

Mr. Kaplan said he had been skeptical of such highly heralded wonders as self-driving cars and cryptocurrency. He approached the latest AI boom with the same misgivings, but said he had been won over.

“If this isn’t ‘the singularity,’ it certainly is a singularity: a transformative technological step that will greatly accelerate a whole range of art, science and human knowledge — and cause a number of problems,” he said.

Critics counter that even the impressive results of LLMs are a far cry from the vast, global intelligence long promised by the Singularity. Part of the problem with separating hype from reality is that the engines powering this technology are increasingly obscured. OpenAI, which started as a nonprofit built around open source code, is now a for-profit venture that critics say is essentially a black box. Google and Microsoft also offer limited visibility.

Much of the AI research is done by the companies that have a lot to gain from the results. Researchers at Microsoft, which invested $13 billion in OpenAI, published an article in April concluding that a preliminary version of the latest OpenAI model “exhibits many features of intelligence,” including “abstraction, understanding, vision, coding” and “understanding of human motives and emotions.”

Rylan Schaeffer, a doctoral student in computer science at Stanford, said some AI researchers had painted an inaccurate picture of how these large language models display “emergent abilities” — unexplained capabilities that were not evident in smaller versions of the models.

Together with two Stanford colleagues, Brando Miranda and Sanmi Koyejo, Mr. Schaeffer examined that question in a research paper published last month and concluded that emergent abilities were “a mirage” caused by errors of measurement. In essence, researchers see what they want to see.

In Washington, London and Brussels, lawmakers are weighing the opportunities and risks of AI and beginning to talk about regulation. Mr. Altman is on a roadshow, seeking to deflect early criticism and to promote OpenAI as the shepherd of the Singularity.

That includes an openness to regulation, though exactly what that would look like remains fuzzy. Silicon Valley has generally believed that government is too slow and too clueless to oversee rapid technological development.

“There’s no one in government who can get it right,” Eric Schmidt, Google’s former CEO, said in an interview on “Meet the Press” last month, arguing for AI self-regulation. “But the industry can just about get it right.”

AI, like the Singularity, is already being described as irreversible. “To stop it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” Mr. Altman and several of his colleagues wrote last month. If Silicon Valley doesn’t build it, they added, others will.

Less discussed are the enormous profits to be made from uploading the world. For all the talk of AI as a boundless wealth-generating machine, the people getting rich are, for the most part, already rich.

Microsoft has seen its market cap increase by half a trillion dollars this year. Nvidia, a maker of chips that power AI systems, recently became one of the most valuable public U.S. companies when it said demand for those chips had skyrocketed.

“AI is the technology the world has always wanted,” Mr. Altman tweeted.

It’s certainly the technology that the tech world has always wanted, arriving at the absolute best possible time. Last year, Silicon Valley was reeling from layoffs and rising interest rates. Crypto, the previous boom, was mired in fraud and disappointment.

Follow the money, said Charles Stross, a co-author of the novel “The Rapture of the Nerds,” a comedic take on the Singularity, as well as the author of “Accelerando,” a more serious attempt at describing what life could soon be like.

“The real promise here is that companies will be able to replace many of their flawed, expensive, slow, human information processing subunits with pieces of software, speeding things up and reducing their overheads,” he said.

The Singularity has long been imagined as a cosmic event, literally breathtaking. And it still might be.

But it could manifest itself primarily—thanks in part to the bottom-line obsession of contemporary Silicon Valley—as a tool to reduce corporate America’s workforce. If you’re sprinting to add trillions to your market cap, heaven can wait.
