News

Elon Musk's Influence Over Trump Might Lead to Tighter AI Rules

Donald Trump’s reelection has spurred speculation, as I noted in depth last week, that the new administration might favor lighter regulation and reject government-mandated safeguards around AI. But those looking to rein in AI companies might find an ally in billionaire Elon Musk.

Elon Musk listens as US President-elect Donald Trump speaks during a meeting with House Republicans at the Hyatt Regency hotel in Washington, DC, on Nov. 13. (Allison Robbert/Getty Images)

The South African immigrant, who’s become one of the world’s richest men through his work at PayPal, Tesla and other tech companies, is a prominent Trump booster poised to play an active role in the new administration. But despite his early backing of OpenAI and Google’s DeepMind, and his AI investments at Tesla (self-driving cars, taxis and robots) and at his AI startup xAI, Musk has also been among those warning that AI could become too powerful, too fast, in ways that could harm humanity.


“His doom-laden views on AI run so deep that they were the reason he broke up his friendship with Google’s co-founder Larry Page. ‘The final straw was Larry calling me a “species-ist” for being pro-human consciousness instead of machine consciousness,’ Musk told CNBC’s David Faber last year,” Bloomberg’s Parmy Olson noted.

“Musk might be a thin-skinned narcissist, but he’s also a purist for whom money is a means by which to achieve grander goals, and he’ll put ideology and ego before his financial interests,” Olson added. “If Musk reaches the unique position of shaping national rules around AI, he’ll likely want to use that perch to make good on his ideology. That would do more for his ego than loosening the regulatory rules on AI, which would help his competitors just as much as it would help xAI and Tesla.”

MIT AI professor Max Tegmark also believes that Musk’s influence over Trump on tech industry issues might lead to “tougher safety standards on AI,” including preventing the development of an artificial general intelligence, a system that matches or exceeds human intelligence. “If Elon manages to get Trump’s ear on AI issues we’re more likely to get some form of safety standards, something that prevents AGI,” Tegmark told The Guardian. “He might help Trump understand that an AGI race is a suicide race.”

Still, doomers and others worried about the future of AI shouldn’t place all their bets on Musk. Added Bloomberg’s Olson: “Musk’s worries about AI doom don’t make him a safe pair of hands for AI policy. Just look at what has happened to Twitter under his watch. Poisonous rhetoric against immigrants and people of color has proliferated on the platform … while conspiracy theories have little trouble going viral, often thanks to posts by Musk himself.”

As we say in journalism: Stay tuned, as this is a developing story.

Here are the other doings in AI worth your attention.

OpenAI’s progress may be stalled by a lack of new training data

OpenAI has been signing licensing agreements with some publishers, and fending off lawsuits from others, as it tries to meet the insatiable appetite for new training data. That data, the billions of bits of information that include copyrighted stories created by news publishers and others, is what its AI engines, or large language models, are fed so they can answer all the questions we ask of chatbots like ChatGPT.

So I took note of a report that said the company is seeing a slower rate of improvement in the new AI model it’s developing, called Orion. The team testing Orion has revealed that the model’s improvement rate “is nowhere near as high as the rate of improvement we saw between the launch of GPT-3 and GPT-4,” according to Tool Report. The reason: “a lack of unused, high-quality, real-life training data.”

OpenAI didn’t respond to my request for comment about Orion and any issues with training data. 

Training data is a big deal, given that OpenAI, Google and others creating LLMs have already reportedly scraped all the content there is to scrape off the internet to feed their AI machines. Now that content owners, like The New York Times and numerous high-profile authors, are wise to that (with a recent Ziff Davis report noting just how much AI companies rely on high-quality content), OpenAI has looked to licensing deals to ensure a steady supply of the new content being created every day. (And it’s why Microsoft and VC firm A16z argued this month that copyright law shouldn’t get in the way of for-profit AI startups, saying the startups need “free and accessible” access to new works and knowledge.)

Tool Report’s story suggests OpenAI may be experimenting with training Orion on “synthetic data — which has been generated using AI models and imitates patterns seen in real-world data — alongside real-world data to introduce new layers of variability and nuance, so the model can improve its ability to process and handle more complex, real-world scenarios. While synthetic data could be a way to overcome the data scarcity issue, it’s a fairly new concept that is susceptible to bias and inaccuracies.”
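OpenAI hasn’t described how such a pipeline would work, but the basic idea behind synthetic training data is simple: use an existing model to generate new text that imitates real examples, filter out the junk, and mix what survives into the training set. Here’s a minimal sketch in Python; the model name, prompt and crude length-based quality filter are all illustrative assumptions, not details from the report.

```python
# pip install openai
# Minimal sketch of synthetic-data generation: an existing model writes
# new text in the style of real seed examples. The model name, prompt
# and crude length filter are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_examples = [
    "The city council voted 7-2 to expand the bike-lane network downtown.",
    "Quarterly earnings beat expectations, sending shares up 4% in early trading.",
]

synthetic = []
for seed in seed_examples:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice for this sketch
        messages=[{
            "role": "user",
            "content": "Write three new one-sentence news snippets in the "
                       f"same style and register as this example:\n{seed}",
        }],
    )
    text = response.choices[0].message.content
    # Naive quality filter: keep only non-trivial lines. Real pipelines
    # rely on much heavier deduplication, grading and bias checks.
    synthetic.extend(s.strip() for s in text.splitlines() if len(s.strip()) > 20)

print(f"Generated {len(synthetic)} synthetic examples from {len(seed_examples)} seeds")
```

The filtering step is where the report’s caveat bites: a model trained on its own outputs can amplify the generator’s biases and mistakes, which is why “susceptible to bias and inaccuracies” is doing a lot of work in that quote.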

Add this to the list of developing AI stories. 

‘Last’ Beatles song, created with AI help, gets a Grammy nod 

What Paul McCartney has called the “last” original Beatles song, released after AI technology was able to help recover vocals John Lennon recorded before his death, is now up for a Grammy in the best rock performance category. It’s the first Grammy nomination for the Fab Four — Lennon, McCartney, George Harrison and Ringo Starr — in 27 years (the band has garnered nearly 25 nominations over the years).


Now and Then, based on a demo Lennon recorded before he died in 1980, was called the last Beatles tune by McCartney when he and Starr announced how the track was being assembled, ahead of its release in November 2023. (The official music video has nearly 61 million views.) With the help of AI technology developed by filmmaker Peter Jackson, the surviving members of the band were able to uncouple Lennon’s vocal track from the demo of him singing and playing the piano. (The recording was given to them by Lennon’s widow, Yoko Ono.) McCartney and Starr added new instrumentation and included guitar recordings from the ’90s made by Harrison, who died in 2001.
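Jackson’s tools aren’t public, but the underlying technique, separating a mixed recording into isolated stems, is available in open-source form. Here’s a rough illustration using Meta’s open-source Demucs separator; the input file name is a placeholder, and this is emphatically not the software Jackson’s team built, just the same general idea.

```python
# pip install demucs
# Rough illustration of vocal/accompaniment stem separation with the
# open-source Demucs model. NOT the tool Peter Jackson's team built;
# it just demonstrates the same general technique.
import subprocess
from pathlib import Path

demo = "voice_and_piano_demo.mp3"  # placeholder input: a mixed recording

# --two-stems=vocals asks Demucs for two outputs: the isolated vocal
# and everything else (here, the piano accompaniment).
subprocess.run(["demucs", "--two-stems=vocals", demo], check=True)

# By default, Demucs writes results under separated/<model>/<track>/.
out_dir = Path("separated") / "htdemucs" / Path(demo).stem
print("Isolated vocal:", out_dir / "vocals.wav")
print("Accompaniment:", out_dir / "no_vocals.wav")
```

From there, the separated vocal can be mixed with freshly recorded instrumentation, which is essentially what McCartney and Starr did, at far higher fidelity.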

The nomination comes at a time when artists are concerned about how AI might undercut the authenticity of the music, art, films and other creative projects they produce. “Although AI was used in the production of Now and Then, it fits within the guidelines for the Grammys that state ‘only human creators are eligible’ and that work which features ‘elements of AI material’ is allowed in certain categories,” CNET noted (in case you were wondering).

That’s because unlike other AI-generated music — notably the 2023 viral song Heart on My Sleeve by the musician Ghostwriter977, which used AI to sound like the artists Drake and The Weeknd, without their knowledge or permission — Now and Then doesn’t mimic or fake Lennon’s voice. AI was just used to clean up a track he recorded.

When it comes to poetry, some people prefer the AI stuff

Heart on My Sleeve, which was rejected for Grammy consideration, demonstrated just how good AI is becoming at copying the work of human artists and fooling fans.

Which brings me to a Nov. 14 study, published on Nature.com, that found humans weren’t able to tell the difference between poetry written by people and poetry generated by an AI, and in some cases rated the AI-generated poetry “more favorably.” To be fair, the study asked “non-expert readers,” as opposed to, say, die-hard fans of Emily Dickinson or William Shakespeare, to rate poems by noted authors. They were told the poems were written either by a human or an AI, or they weren’t told anything at all about who (or what) wrote them.

The purpose was to understand, in part, human biases toward AI-generated content, the researchers explained.

“AI-generated images have become indistinguishable from reality. AI-generated paintings are judged to be human-created artworks at higher rates than actual human-created paintings; AI-generated faces are judged to be real human faces at higher rates than actual photos of human faces, and AI-generated humor is just as funny as human-generated jokes. Despite this, studies have consistently found a bias against AI-generated artwork; when told that an artwork is AI-generated, participants rate the work as lower quality.”

Why did the AI-generated poems rate “more favorably in qualities such as rhythm and beauty”? The answer: Human poets may be too deep for nonexpert poetry readers, who found the “simplicity” of AI-generated poems easier to understand and misinterpreted the “complexity of human poems as incoherence generated by AI.”

“Non-expert poetry readers prefer the more accessible AI-generated poetry,” which communicates “emotions, ideas, and themes in more direct and easy-to-understand language,” the researchers said. But the readers “expect AI-generated poetry to be worse.”

And that leads us to something the researchers call the “more human than human” phenomenon: “People are more likely to believe that AI-generated poems are human-written because they prefer the AI poems and because they assume that they are more likely to like human-written than AI-generated poems.”

There’s more to it than that, which is why I encourage you to read through the report. Or you can just check out two of my favorite poems, both created by humans. The first is “Stopping by Woods on a Snowy Evening,” by Robert Frost. And the second, “Cow Poetry,” is a Far Side cartoon by Gary Larson.  

Media companies continue experimenting with AI tools

Though there are valid concerns that some publishers might replace reporters and editors at their media brands with AI tools (as the companies face pressure to cut costs — and employees), newsrooms should be experimenting with gen AI tech to figure out ways to enhance the work their reporters do and provide new services to their readers, according to numerous media studies, including this April report from the Associated Press.

That’s why it should be no surprise that The Washington Post, which has been among the most aggressive in introducing AI features, has a new tool called Ask the Post, which “delivers AI-generated answers from published Washington Post reporting” going back to 2016. What kind of questions? Examples include, “How much did Taylor Swift make from the Eras Tour?” and, “Why is a nuclear-based clock so important?”

The Post acknowledges that the tool is an experiment and tells readers, on its FAQ page, that they’ll need to verify any of the answers provided, since even though the answers are summarized from the news org’s own stories, no gen AI engine can “entirely eliminate or prevent the risk of mistakes or ‘hallucination,’ a technical term that refers to the AI misinterpreting the underlying texts upon which it is basing responses.”
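The Post hasn’t published its architecture, but tools like this typically follow a retrieval-augmented generation (RAG) pattern: find the archive passages most relevant to the question, then have a language model answer only from those passages. A minimal sketch, in which the archive snippets, embedding model and prompt are all stand-in assumptions:

```python
# pip install sentence-transformers numpy
# Minimal RAG sketch: embed archive passages, retrieve the ones closest
# to a question, and build a prompt that confines the model to those
# sources. The snippets and model choice are placeholders, not anything
# from The Washington Post's actual system.
import numpy as np
from sentence_transformers import SentenceTransformer

archive = [
    "The Eras Tour grossed over $1 billion, a record for a concert tour.",
    "Researchers say a nuclear clock could redefine precision timekeeping.",
    "The city approved a new transit budget after months of debate.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(archive, normalize_embeddings=True)

question = "Why is a nuclear-based clock so important?"
q_vec = model.encode([question], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product gives cosine similarity.
scores = doc_vecs @ q_vec
top = np.argsort(scores)[::-1][:2]

context = "\n".join(archive[i] for i in top)
prompt = (
    "Answer the question using ONLY the sources below. "
    "If they don't contain the answer, say so.\n\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this prompt would then go to a language model
```

Grounding answers in retrieved text narrows, but doesn’t close, the hallucination gap the Post’s FAQ describes: the model can still misread the passages it’s handed.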

The Wall Street Journal and its Dow Jones Newswires say they’re also experimenting with AI. “We use AI and machine learning in complex data-driven investigations, to summarize articles, to translate some content into local languages and to power website features such as audio versions of our articles.”

One WSJ use case is the Key Points summary that appears at the top of stories. Taneth Evans, head of digital at the WSJ, told The Verge that the news org is “currently running a series of A/B tests to understand our users’ needs with regards to summarization.”
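Evans didn’t say how those tests are scored, but a summarization A/B test usually comes down to comparing an engagement metric between randomly assigned groups of readers. A toy sketch of the arithmetic, with every number invented:

```python
# Toy sketch of scoring an A/B test on article summaries: a
# two-proportion z-test on click-through rate. All numbers invented.
from math import erf, sqrt

# Variant A: no Key Points box; Variant B: with Key Points box.
clicks_a, views_a = 1_180, 24_000
clicks_b, views_b = 1_420, 24_000

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"CTR A={p_a:.2%}, B={p_b:.2%}, z={z:.2f}, p={p_value:.4f}")
```

If the p-value is small enough, the summary variant’s higher click-through is unlikely to be chance, which is the kind of evidence a newsroom wants before rolling a feature out to everyone.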

If you’re interested in how AI may be reshaping journalism — the opportunities and challenges — the Columbia Journalism Review published an extensive report in February that’s worth a read. 

Also worth knowing…

OpenAI’s ChatGPT and Google’s NotebookLM are rapidly gaining traction among users, based on the number of visits each attracted in November, according to market research firm Similarweb. Traffic to ChatGPT rose 17.2% compared with the prior month, to 3.7 billion visits. Meanwhile, NotebookLM, which lets you ask questions about the curated set of notes, reports, websites, documents, links, videos and other content you add into Google’s AI-enhanced note-taking app, saw traffic triple to 31.5 million visitors in November.

If you’re thinking of traveling, or moving to another country for any reason, and need to learn a new language, Memrise, a language learning app, has added new AI Buddies designed to serve as chatbot tutors that can help. The seven AI Buddies each focus on a different aspect of learning, including simulating conversations with native speakers, and are available for 17 languages, including Danish, French, German, Italian, Spanish, Swedish and Turkish. You can try the demo here.

Though it may not surprise you that 45% of US consumers say they expect to use AI services in the next five years, an October survey by Morning Consult, commissioned by industry trade group CCIA, found that 30% of consumers aged 65 to 70 say they’ve used AI services, with 7% using them on at least a weekly basis. “Optimism about the potential benefits of AI is more widespread than concern,” said Trevor Wagener, CCIA’s chief economist. The complete survey is here.

Even with AI adoption on the rise, researchers have noted a widening gender gap in who’s using the tools. After studying 3 million job profiles and 12,000 workers around the world, recruitment service Randstad found that 71% of AI-skilled workers are men and just 29% are women, a 42 percentage point gender gap. The study can be found here.
