
Hey, Alexa, what should students learn about AI?


Rohit Prasad, a senior Amazon executive, had an urgent message for ninth and tenth graders at Dearborn STEM Academy, a public school in Boston’s Roxbury neighborhood.

He had recently come to the school to attend an Amazon-sponsored artificial intelligence class in which students learn how to program simple tasks for Alexa, Amazon’s voice-activated virtual assistant. And he assured the Dearborn students that there would soon be millions of new jobs in AI.

“We have to create the talent for the next generation,” Mr. Prasad, the lead scientist for Alexa, told the class. “So we educate about AI at the earliest grassroots level.”

A few miles away, Sally Kornbluth, the president of the Massachusetts Institute of Technology, gave a more sobering message about AI to students from local schools who had gathered at the Kennedy Library complex in Boston for a workshop on AI risk and regulation.

“Because AI is such a powerful new technology, it really needs some rules to work well in society,” said Dr. Kornbluth. “We have to make sure it doesn’t do any damage.”

The events on the same day – one boosting work in artificial intelligence and the other warning against too hasty deployment of the technology – reflected the larger debate currently raging in the United States about the promise and potential danger of AI.

Both student workshops were organized by an MIT initiative on “responsible AI” whose donors include Amazon, Google and Microsoft. And they underscored a question that has been puzzling school districts across the country this year: How should schools prepare students to navigate a world where, according to some prominent AI developers, the spread of AI-powered tools seems almost inevitable?

Teaching AI in schools is not new. Courses such as computer science and civics now regularly include exercises on the societal impact of facial recognition and other automated systems.

But the push for AI education became more urgent this year after news about ChatGPT – a new chatbot that can produce humanlike homework essays and sometimes generate misinformation – began spreading in schools.

Now “AI literacy” is a new buzz phrase in education. Schools are looking for resources to teach it. Some universities, technology companies, and non-profit organizations are responding with ready-made curricula.

Classes are ramping up even as schools wrestle with a fundamental question: Should they teach students to program and use AI tools, providing training in the technical skills employers seek? Or should students learn to anticipate and mitigate the harms of AI?

Cynthia Breazeal, a professor at MIT who leads the university’s initiative Responsible AI for Social Empowerment and Education, said her program was designed to help schools do both.

“We want students to be informed, responsible users and informed, responsible designers of these technologies,” said Dr. Breazeal, whose group organized the AI workshops for schools. “We want to make them informed, responsible citizens about these rapid developments in AI and the many ways they impact our personal and professional lives.”

(Disclosure: I was recently a fellow in the Knight Science Journalism program at MIT.)

Other education experts say schools should also encourage students to think about the wider ecosystems in which AI systems operate. That could include students researching the business models behind new technologies or exploring how AI tools leverage user data.

“When we engage students in learning about these new systems, we really need to think about the context around these new systems,” said Jennifer Higgs, an assistant professor of learning and humanities at the University of California, Davis. But often, she noted, “that piece is still missing.”

The Boston workshops were part of a “Day of AI” event hosted by Dr. Breazeal, which attracted several thousand students worldwide. It provided a glimpse of the different approaches schools take to AI education.

At Dearborn STEM, Hilah Barbot, a senior product manager at Amazon Future Engineer, the company’s computer science education program, led a voice AI class for students. MIT developed the lessons with the Amazon program, which provides coding curricula and other programs for K-12 schools. The company has provided more than $2 million in grants to MIT for the project.

First, Ms. Barbot explained some voice AI jargon. She taught students about “utterances,” the phrases consumers might say to prompt Alexa to respond.

Then students programmed simple tasks for Alexa, such as telling jokes. Jada Reed, a ninth grader, programmed Alexa to respond to questions about Japanese manga characters. “I think it’s really cool that you can train him to do different things,” she said.
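To give a rough sense of what such a lesson involves: a joke-telling task like Jada’s can be built as a custom Alexa “skill,” in which a handler responds whenever a user’s utterance matches a named intent. The sketch below uses Amazon’s ask-sdk-core Python library; the intent name, launch prompt, and joke text are invented for illustration and are not the students’ actual code, which the article does not show.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name


class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user opens the skill without asking for anything yet."""

    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return handler_input.response_builder.speak(
            "Hi! Ask me to tell you a joke."
        ).response


class TellJokeIntentHandler(AbstractRequestHandler):
    """Responds when an utterance matches the (hypothetical) TellJokeIntent."""

    def can_handle(self, handler_input):
        return is_intent_name("TellJokeIntent")(handler_input)

    def handle(self, handler_input):
        joke = "Why did the robot go back to school? To improve its learning rate."
        return handler_input.response_builder.speak(joke).response


# Register the handlers and expose an entry point for deployment,
# e.g. as an AWS Lambda function behind the skill.
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(TellJokeIntentHandler())
handler = sb.lambda_handler()
```

In a real skill, a separate interaction model (configured in the Alexa developer console) maps sample utterances such as “tell me a joke” to the intent; Alexa’s speech recognition and intent matching handle the rest.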

Dr. Breazeal said it’s important that students have access to professional software tools from leading technology companies. “We give them future-proof skills and perspectives on how to work with AI to do things they care about,” she said.

Some Dearborn students, who had already built and programmed robots in school, said they appreciated learning how to code another technology: voice-activated help bots. Alexa uses a range of AI techniques, including automatic speech recognition.

At least a few students also said they had privacy and other concerns about AI-assisted tools.

Amazon records consumer conversations with its Echo speakers after a person says a “wake word” like “Alexa.” Unless users opt out, Amazon may use their interactions with Alexa to target them with ads or use their voice recordings to train its AI models. Last week, Amazon agreed to pay $25 million to settle federal charges that it kept children’s voice recordings indefinitely, in violation of the federal children’s online privacy law. The company said it disputed the allegations and denied breaking the law. The company noted that customers could view and delete their Alexa voice recordings.

But the hour-long Amazon-led workshop didn’t address the company’s data practices.

Dearborn STEM students regularly scrutinize technology. Several years ago, the school introduced a course in which students used AI tools to create deepfake videos of themselves – that is, fabricated content – and investigate the consequences. And the students had thoughts about the virtual assistant they were learning to program that morning.

“Did you know there’s a conspiracy theory that Alexa listens in on your conversations to show you ads?” asked a ninth grader named Eboni Maxwell.

“I’m not afraid of it listening,” replied Laniya Sanders, another ninth grader. Still, Ms. Sanders said she avoided using voice assistants because “I just want to do it myself.”

A few miles away at the Edward M. Kennedy Institute for the United States Senate, an education center with a full-size replica of the U.S. Senate chamber, dozens of students from the Warren Prescott School in Charlestown, Mass., were tackling a different subject: AI policy and safety regulations.

Playing the role of senators from several states, the high school students took part in a mock hearing where they debated provisions for a hypothetical AI safety law.

Some students wanted to ban companies and police departments from using AI to target people based on data such as their race or ethnicity. Others wanted to require schools and hospitals to assess the fairness of AI systems before deploying them.

The exercise was not unfamiliar to the students. Nancy Arsenault, an English and civics teacher at Warren Prescott, said she often asked her students to think about how digital tools affect them and the people they care about.

“As much as students love technology, they are well aware that unlimited AI is not something they want,” she said. “They want to see boundaries.”
