News

New Trump era could give AI companies a free hand

With the re-election of former President Donald Trump, Silicon Valley is now speculating about how the new administration will adopt business-friendly policies toward AI (and cryptocurrency) companies. After all, a relaxation of regulations was one of the main reasons some Silicon Valley venture capitalists, investors and executives who have poured billions into generative AI decided to support the Republican candidate and a party they previously opposed, as was widely reported in July.


Axios says American voters have decided that “AI will grow up in a tolerant household where anything goes, rather than under the guidance of stricter parents. Trump’s reputation may be that of a strongman, and his MAGA conservatism embraces tough talk, but last night’s Republican victory makes it much more likely that AI will run wild as it evolves.”

Trump’s allies have drawn up plans to “Make America first in AI,” in part by rolling back “burdensome regulations” and safeguards put in place by President Joe Biden, and endorsed by Vice President Kamala Harris, as part of Biden’s 2023 AI executive order. And of course, fewer restrictions on AI development could benefit Trump’s supporters, including one of his biggest donors, Elon Musk, and Musk’s xAI company.

As for Trump, he has “repeatedly said he plans to dismantle Biden’s AI policy framework on ‘day one’ and has aligned himself with kingmakers who have sharply criticized all but the lightest regulations,” notes TechCrunch. “The AI EO has achieved a lot. Over the past year, the Commerce Department established the U.S. AI Safety Institute (AISI), a body that examines risks in AI systems, including those with defense applications,” TechCrunch reports. “Critics associated with Trump argue that the EO’s reporting requirements are onerous and effectively force companies to reveal their trade secrets.”

Bloomberg also reports that Trump will “nullify” Biden’s AI executive order, while Fast Company predicts the returning president “will likely push the government away from overseeing the AI industry.” Newly elected Vice President JD Vance, who worked as a venture capitalist before entering politics and could potentially lead AI policy in the new Trump administration, has argued that AI regulation would only lead to market dominance by a handful of major players, leaving startups out.

Meanwhile, The New York Times writes that while Trump’s first term was marked by “chaos and uncertainty” for the tech industry, most executives will “kiss the ring,” as Amazon founder Jeff Bezos has already done, and “quietly tolerate – if not enthusiastically support” his second-term agenda for fear of presidential retaliation and for fear of losing out on government contracts.

I don’t know what the new Trump administration will do, but I’ll remind everyone that even AI boosters, including OpenAI CEO Sam Altman, aren’t sure how to balance the boom and doom scenarios presented by powerful and rapidly evolving generative AI technology.

Here are the other AI developments worth your attention.

What’s in your AI? Ziff Davis has some answers for publishers

If the new administration actually decides to take a laissez-faire attitude toward AI regulation, we may never know what data and information is being fed as training data into the large language models that power today’s most powerful chatbots, including OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude and Meta’s Llama. Although California passed a state law requiring AI companies to reveal what’s in their training data, Biden’s AI executive order included no rules requiring such disclosure.

Why do we want to know? Well, because if these LLMs are going to become the repository of all human information and increasingly the places we go to find answers to our questions (as they turn into AI-powered search engines and not just autocomplete on steroids), then we need to know what data is driving those answers so we can trust that the information isn’t outright wrong or biased.


Copyright holders also want to know, because they say AI companies are scraping their content from the internet without permission or compensation, which they say means the LLM training data consists largely of illegally obtained content. That’s why some publishers are suing AI companies, including OpenAI and Perplexity. The AI companies counter that the content is fair game because it’s “publicly available” — prompting lawyers to note that copyrighted content is still protected even if it’s “publicly available.”

What do we know about what’s in all that training data? According to a new study by Ziff Davis, reported by Axios, “Leading AI companies like OpenAI, Google, and Meta rely on content from premium publishers to train their large language models (LLMs) more than they publicly admit.”

OpenAI, Google, Meta and Anthropic did not immediately respond to CNET’s requests for comment on the study, which can be found here. (Full disclosure: CNET is owned by Ziff Davis.)

If you’re interested in why training data and copyright issues are worth paying attention to (say, if your kids want to be writers, artists, filmmakers or researchers), TechCrunch also has an analysis that’s worth reading. It examines arguments from Microsoft and venture capital firm A16z about why copyright law is holding back AI startups. Note: These for-profit startups — because they want to make money — don’t want to pay for the copyrighted content that they believe should be “free and accessible.”

Meta allows its AI engine to be used for US military purposes

In addition to researchers and entrepreneurs, Meta said last week that its open-source Llama AI models are also “being made available to U.S. government agencies, including those working on defense and national security applications, and private sector partners supporting their work.”

“Responsible use of open source AI models advances global security and helps position the US in the global race for AI leadership,” the company said in a blog post.

The New York Times notes that this decision is an exception to Meta’s “acceptable use policy,” which prohibits the use of Llama AI for “military, warfare, nuclear industries” and other purposes, including criminal activity and the intimidation, abuse or bullying of individuals or groups.

Meta said its partners are already using Llama to “support specific national security team missions, such as planning operations and identifying adversaries’ vulnerabilities.” It also said they use Llama to help aircraft technicians “diagnose problems faster and more accurately” by compiling maintenance documents.

Also worth knowing…

In what can only be described as a cost-cutting measure gone seriously wrong, a state-funded radio station in Poland fired its on-air journalists, replaced them with three AI-generated “presenters” and then had to pull the plug on the whole venture after the AI touted an impressive interview: a conversation with Polish poet Wislawa Szymborska, winner of the 1996 Nobel Prize in Literature. The catch for OFF Radio Krakow, as The New York Times reported: Szymborska died in 2012. See the details in the Times.

Among its technology predictions for 2025, market research firm IDC foresees a change in the way AI apps work: “The rise of copilots through the 2022 GenAI boom is quickly giving way to AI agents — fully automated software components capable of using knowledge and skills to assess situations and take action” with limited or no human intervention.

If you’re interested in tinkering with Apple Intelligence on your iPhone 15 Pro or iPhone 16, CNET recommends trying out three features: Summaries, Siri, and the Cleanup feature in iPhotos.

Elon Musk believes that humanoid robots, such as the Optimus robots created by Tesla, will “outnumber people within 20 years.” Musk told the audience at the Future Investment Initiative conference that the cost of a “robot that can do everything” would be between $20,000 and $25,000. Based on Musk’s calculations, that would mean 10 billion humanoid robots on the planet. Musk said the AI-powered Optimus (aka Tesla Bot) would enter limited production in 2025. (I didn’t see any mention of whether it will adhere to Isaac Asimov’s three laws of robotics.)
