OpenAI Co-Founder Raises $1 Billion for Safety-Focused AI Startup SSI
Safe Superintelligence (SSI), recently co-founded by Ilya Sutskever, former chief scientist of OpenAI, has raised $1 billion (about Rs 8,398 crore) to develop safe artificial intelligence (AI) systems that far exceed human capabilities, company executives told Reuters.
SSI, which currently has 10 employees, plans to use the funds to acquire computing power and hire top talent. It will focus on building a small, highly reliable team of researchers and engineers spread between Palo Alto, California, and Tel Aviv, Israel.
The company declined to share its valuation, but sources close to the matter said it was valued at $5 billion (roughly Rs. 41,993 crore). The funding underscores how some investors are still willing to place outsized bets on exceptional talent focused on fundamental AI research. That's despite a broader decline in interest in funding such companies, which can run at a loss for years, an environment that has pushed several startup founders to leave their posts for tech giants.
Investors included top venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel. NFDG, an investment partnership run by Nat Friedman and SSI Chief Executive Daniel Gross, also participated.
“It’s important that we are surrounded by investors who understand, respect and support our mission, which is to go straight to safe superintelligence and to spend a number of years doing R&D on our product before we bring it to market,” Gross said in an interview.
AI safety, which refers to preventing AI from causing harm, is a hot topic due to fears that rogue AI could act against humanity's interests or even cause human extinction.
A California bill that would impose safety regulations on AI companies has divided the industry, opposed by companies including OpenAI and Google, and supported by Anthropic and Elon Musk's xAI.
Sutskever, 37, is one of the most influential technologists in AI. He founded SSI in June with Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever is chief scientist and Levy is principal scientist, while Gross is responsible for computing power and fundraising.
New mountain
Sutskever said his new venture made sense because he had “identified a mountain that was slightly different from what I was working on.”
Last year, he served on the board of directors of OpenAI's nonprofit parent, which voted to fire OpenAI CEO Sam Altman over a "breakdown of communications."
Within days, he reversed his decision and, along with nearly all of OpenAI’s employees, signed a letter demanding that Altman return and that the board resign. But the turn of events diminished his role at OpenAI. He was ousted from the board and left the company in May.
After Sutskever's departure, the company dismantled its Superalignment team, which worked to ensure AI remains aligned with human values in preparation for the day AI surpasses human intelligence.
Unlike OpenAI, whose unorthodox corporate structure was adopted for AI safety reasons but made Altman's ouster possible, SSI has a regular, for-profit structure.
SSI is currently focusing primarily on hiring people who fit within the company culture.
Gross said they spend hours vetting candidates for "good character" and look for people with exceptional abilities rather than placing too much emphasis on credentials and experience in the field.
"What excites us is finding people who are interested in the work, not in the scene or the hype," he added.
SSI says it plans to partner with cloud providers and chip companies to fund its computing needs, but hasn’t yet decided which companies it will work with. AI startups often partner with companies like Microsoft and Nvidia to address their infrastructure needs.
Sutskever was an early proponent of scaling, a hypothesis that AI models would perform better with large amounts of computing power. The idea and its implementation led to a wave of AI investment in chips, data centers, and energy, laying the groundwork for generative AI developments like ChatGPT.
Sutskever said he will approach scaling differently than his former employer did, but he declined to provide details.
“Everyone just says, ‘scaling hypothesis.’ Everyone forgets to ask, ‘What are we scaling?’” he said.
“Some people can work really long hours and they just go down the same path faster. That’s not really our style. But when you do something different, it becomes possible to do something special.”
© Thomson Reuters 2024
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)