This AI startup is supporting artificial voices and the people who need them

David DuBois, a U.S. Army veteran and retired director of physical security for the U.S. Capitol Police, was diagnosed with ALS in 2022. His family has not heard him speak in more than two years.

Voice AI company WellSaid was able to use less than 40 minutes of old voicemail messages and videos to create a signature voice that matched DuBois’ authentic voice: a voice with emotion and humor, including his favorite line: “You’re killin’ me, Smalls.”

WellSaid says it ultimately wants to use artificial intelligence to augment humanity, rather than replace it.

While working at Paul Allen’s Ai2 Institute, a non-profit research center that explores the possibilities of artificial intelligence, Michael Petrochuk developed an algorithm for a realistic AI voice just three months after graduating from the University of Washington. It was there that Petrochuk met Matt Hocking, the future co-founder of WellSaid.

Petrochuk, who identifies as autistic, was inspired to turn his challenges into opportunities. WellSaid was one way he wanted to make a positive impact on the world.

“I was born with more wires in my brain than most people. This means my brain is working overtime, thinking, processing and feeling,” Petrochuk said. “I often bring ideas that many people miss. I notice patterns in my work that I use to gain critical insights.”

WellSaid has been a strong advocate for AI accountability initiatives in the text-to-speech space. By the time reports on the risks of AI became widely known, WellSaid had already implemented programs around revenue sharing, content moderation and voice actor anonymity.

When WellSaid launched in 2018, it was originally designed to help educators create informative content. Today, WellSaid is used to educate and support millions of people, including voice actors, senior and disabled customers, and related organizations such as Audible Vision, which narrates subtitles for blind viewers in a realistic, human voice.

WellSaid’s competitors and differentiators

WellSaid’s competitors include ElevenLabs and Murf AI, but WellSaid focuses on a strictly controlled training model that does not use public, open-source data.

Companies like ElevenLabs were founded on the desire to translate text and languages in a way that seamlessly delivers realistic speech. ElevenLabs, like WellSaid, works with patients with degenerative diseases such as ALS. And Murf AI offers a voice-data sourcing option that pays you for submitting your voice recordings.

But with open-source data, you don’t necessarily retain autonomy over your likeness, whether you pay $5 to try out an audio double or submit your own voice recording.

After all, it’s only right to worry about the abuse of AI-generated voices. Remember the OpenAI-Scarlett Johansson case? Or the AI-fake robocall impersonations of President Joe Biden that the FCC ruled illegal?

With increasing media attention and lawsuits over AI-generated breaches of trust and security policies, WellSaid’s private data sourcing is not only a smarter decision but a necessary one.

WellSaid says it’s not harvesting millions of voices from across the internet, but rather focusing on its mission to use AI for good. In this scenario, it’s about high-quality output and consent to use the voices on its platform.

“All of our voices are from actors,” said Cook, CEO of WellSaid. “We’ve recorded them in professional settings. We’ve vetted them, we have their approval. We pay them, we pay them a royalty. We pay them for their time and training. We pay them an ongoing royalty.”

How ‘Ethical’ AI Can Support the Human Experience

When it comes to AI adoption, the question remains whether people want AI tools in their daily lives. According to a CNET survey published in September, based on data collected by YouGov, 25% of respondents said they don’t find AI tools useful and don’t want them integrated into their phones.

Additionally, 34% are concerned about privacy when using AI on devices, while 45% say they don’t want to pay a subscription for AI tools.

Cook says comfort and trust play a big role in how people decide to interact with AI. He believes AI will eventually become woven into everyday life, allowing people to interact with the technology and make decisions about it based on personal experience.

Is there a world where ethical AI exists – comfortably – in the homes of ordinary people?

“If I look at it as a tool that helps us do things that we can’t do today, then I feel good about the role AI can play in preventing and spreading disease, or in delivering quality health care to people who are underserved or marginalized in society,” Cook said.

“I think in 30, 40, 50 years we’ll look back and say, ‘This was groundbreaking.’ This is a huge step toward making a lot of people’s lives better.”
