
The Optimist’s Guide to Artificial Intelligence and Work


It’s easy to fear that machines are taking over: Companies like IBM and the British telecommunications company BT have cited artificial intelligence as a reason to cut their workforces, and new tools like ChatGPT and DALL-E are making it possible for anyone to take advantage of artificial intelligence’s extraordinary capabilities for themselves. A recent study by researchers from OpenAI (the start-up behind ChatGPT) and the University of Pennsylvania found that for about 80 percent of jobs, at least 10 percent of the tasks could be automated using the technology behind such tools.

“Everyone I talk to, super smart people, doctors, lawyers, CEOs, other economists, their brain first goes to, ‘Oh, how can generative AI replace this thing that humans are doing?’” says Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI.

But that’s not the only option, he said. “The other thing I wish people would do more of is think about what new things can be done now that have never been done before. That is of course a much more difficult question.” It’s also, he added, “where the most value is.”

How tech makers design, business leaders use and policymakers regulate AI tools will determine how generative AI ultimately affects jobs, say Brynjolfsson and other economists. And not all choices are necessarily bleak for employees.

AI can complement human labor rather than replace it. Many companies, for example, use AI to automate call centers. But one Fortune 500 company that provides business software instead used a ChatGPT-like tool to give its employees live suggestions on how to respond to customers. In a study, Brynjolfsson and his co-authors compared call center employees who used the tool with those who didn’t. They found that the tool increased productivity by an average of 14 percent, with most of the gains coming from less skilled workers. Customer sentiment was also higher, and staff turnover lower, in the group that used the tool.
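The study doesn’t describe the company’s actual system, but the basic pattern it tested (feeding the live conversation to a model and surfacing a suggested reply for the human agent to accept, edit or discard) can be sketched in a few lines. Here is a minimal illustration using the OpenAI Python SDK; the model name, prompt wording and sample transcript are all assumptions for the sketch, not details from the study.

```python
# Minimal sketch of an "agent assist" loop: a model drafts a suggested
# reply that the human agent can accept, edit or discard. The model name,
# prompt and sample transcript are illustrative assumptions, not the
# system described in the study.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def suggest_reply(conversation: str) -> str:
    """Ask the model for one suggested response for the agent to review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, for illustration only
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a customer-support agent. Suggest one brief, "
                    "empathetic reply to the customer's last message. "
                    "The human agent makes the final call."
                ),
            },
            {"role": "user", "content": conversation},
        ],
    )
    return response.choices[0].message.content

transcript = "Customer: My invoice was charged twice this month."
print(suggest_reply(transcript))  # shown to the agent as a suggestion only
```

The design choice matters here: the model never talks to the customer directly, which is what makes this complementary to the worker rather than a replacement for them.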

David Autor, an economics professor at the Massachusetts Institute of Technology, said AI could potentially be used to deliver “expertise on tap” in jobs such as healthcare, software development, law and skilled repair. “That presents an opportunity to empower more employees to do valuable work that depends on some of that expertise,” he said.

Employees can focus on different tasks. As ATMs automated the tasks of dispensing cash and taking deposits, the number of bank tellers actually increased, according to an analysis by James Bessen, a researcher at Boston University School of Law. That was partly because branches needed fewer employees and became cheaper to open – so banks opened more of them. But banks also changed the job description: After ATMs, tellers focused less on counting cash and more on building relationships with customers, to whom they sold products such as credit cards. Few jobs can be fully automated by generative AI. But by using an AI tool for some tasks, employees can free up time to expand their work to tasks that cannot be automated.

New technology can lead to new jobs. Agriculture employed almost 42 percent of the working population in 1900, but thanks to automation and technological advances, that share had fallen to just 2 percent by 2000. The massive reduction in agricultural jobs did not lead to widespread unemployment. Instead, technology created many new jobs. A farmer in the early 20th century could not have imagined computer coding, genetic engineering or trucking. In an analysis of census data, Autor and his co-authors found that 60 percent of today’s occupational specialties didn’t exist 80 years ago.

Of course, there is no guarantee that workers will be qualified for the new jobs, or that they will be good jobs. And none of this just happens, says Daron Acemoglu, an MIT economics professor and co-author of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.”

“If we make the right choices, we will create new types of jobs, which is crucial for wage growth and also to really reap the productivity benefits,” said Acemoglu. “But if we don’t make the right choices, this can happen much less.” —Sarah Kessler

Martha’s model behavior. The lifestyle entrepreneur Martha Stewart became the oldest person to grace the cover of Sports Illustrated’s swimsuit issue this week. Stewart, 81, told The Times it was a “big challenge” to have the confidence to pose, but that two months of Pilates had helped. She’s not the first cover model over 60: Maye Musk, Elon Musk’s mother, graced the cover last year at age 74.

TikTok block. Montana became the first state to ban the Chinese-owned short-video app, barring app stores from offering TikTok within its borders starting January 1. The ban is expected to be difficult to enforce, and TikTok users in the state have sued the state government, saying the measure violates their First Amendment rights. The fight offers a glimpse of the potential backlash if the federal government tries to block TikTok nationwide.

Bankers’ blame game. Greg Becker, the former CEO of Silicon Valley Bank, blamed “rumors and misconceptions” for the run on deposits, in his first public remarks since the lender collapsed in March. Becker and former top executives of the failed Signature Bank also told a Senate committee investigating their roles in the banks’ collapses that they would not return millions of dollars in pay.

When OpenAI CEO Sam Altman testified in Congress this week calling for regulation of generative artificial intelligence, some lawmakers hailed it as a “historic” move. In fact, asking lawmakers for new rules is a move straight out of the tech industry’s playbook. Silicon Valley’s most powerful executives have long taken to Washington to demonstrate their commitment to rules in an effort to shape them while unleashing some of the world’s most powerful and transformative technologies without pause.

One reason: A federal rule is much easier to administer than different rules in different states, Bruce Mehlman, a political consultant and former technology policy officer in the Bush administration, told DealBook. Clearer regulation also gives investors more confidence in an industry, he added.

The strategy sounds sensible, but if history is a useful guide, reality can be messier than rhetoric:

  • In December 2021, Sam Bankman-Fried, founder of the failed crypto exchange FTX, was one of six executives who testified on digital assets before the House, calling for regulatory clarity. His company had just submitted a proposal for a “unified common regime,” he told lawmakers. A year later, Bankman-Fried’s businesses were bankrupt, and he faced criminal charges of fraud and of making illegal campaign contributions.

  • In 2019, Facebook founder Mark Zuckerberg wrote an op-ed in The Washington Post, “The Internet needs new rules,” citing flaws in content moderation, election integrity, privacy and data governance at the company. Two years later, independent researchers found that there was more misinformation on the platform than in 2016, even though the company had spent billions to eradicate it.

  • In 2018, Apple’s chief, Tim Cook, said he was generally averse to regulation but supported stricter data protection rules, saying, “It’s time for a group of people to think about what can be done.” But to maintain its business in China, one of its largest markets, Apple has largely relinquished control of customer data to the government as part of the requirements to operate there.


Platforms such as TikTok, Facebook, Instagram and Twitter use algorithms to identify and moderate problematic content. To evade these digital moderators and allow free exchange on taboo topics, users have developed a coded language. It’s called algospeak.

“There’s a linguistic arms race raging online — and it’s not clear who’s winning,” writes Roger J. Kreuz, a professor of psychology at the University of Memphis. Posts about sensitive topics such as politics, sex or suicide can be flagged and deleted by algorithms, leading users to adopt creative misspellings and stand-ins, such as “seggs” and “mascara” for sex, “unalive” for dead, and “cornucopia” for homophobia. There is a history of responding to bans with code, Kreuz notes, such as 19th-century Cockney rhyming slang in England or “Aesopian,” an allegorical language used to circumvent censorship in Tsarist Russia.
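Kreuz’s “arms race” is easy to see in miniature. Below is a minimal sketch, in Python, of the kind of exact-match keyword filter that algospeak is designed to slip past; the blocklist is invented for illustration, and real platforms rely on far more sophisticated machine-learning classifiers.

```python
# Toy exact-match keyword filter, invented for illustration only.
# Real moderation systems use ML classifiers, not a simple blocklist.
import re

BLOCKLIST = {"sex", "dead", "suicide"}  # hypothetical flagged terms

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKLIST for word in words)

print(is_flagged("a frank post about sex"))      # True: exact match
print(is_flagged("a frank post about seggs"))    # False: algospeak slips past
print(is_flagged("the character was unalived"))  # False: euphemism evades it
```

A substitution like “seggs” defeats the lookup entirely, which is why platforms keep retraining their classifiers and users keep coining new stand-ins.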

Algorithms aren’t the only ones that miss the code. The euphemisms and deliberate misspellings are particularly ubiquitous in marginalized communities, but the hidden language sometimes eludes people too, which can lead to fraught miscommunication online. In February, the celebrity Julia Fox found herself in an awkward exchange with a sexual assault survivor after misunderstanding a post about “mascara,” and had to publicly apologize for responding inappropriately to what she thought was a discussion about makeup.

Thank you for reading!

We want your feedback. Email your thoughts and suggestions to dealbook@nytimes.com.
