Amazon is reportedly struggling to make Alexa smarter
Amazon introduced its voice-activated virtual assistant Alexa to the world in November 2014. The technology’s name was said to be inspired by the Star Trek computer system aboard the Starship Enterprise and underscored CEO Jeff Bezos’ ambition to create a conversational and intelligent assistant. However, a report claims that despite a tech demo last year showing off a contextually aware Alexa, it is still a long way from being integrated with modern artificial intelligence (AI) capabilities. A former Amazon employee who worked on Alexa AI has also highlighted that knowledge silos and a fragmented organizational structure are holding back Alexa’s progress.
Former Amazon Employee Highlights Problems With Improving Alexa
In a long post on X (formerly known as Twitter), Mihail Eric, who worked as a Senior Machine Learning Scientist on Alexa AI at Amazon between 2019 and 2021, shared his experiences at the company and the challenges he faced. He also explained why he believes Alexa was a project doomed to fail.
Eric highlighted the “poor engineering process” at the company, saying that Amazon’s highly fragmented organizational structure made it difficult to obtain the data needed to train large language models (LLMs). “It took weeks to access internal data for analysis or experimentation. Data was poorly annotated. Documentation was either non-existent or outdated,” he added.
He also said that different teams were working on identical problems, which created an unproductive atmosphere of internal competition, and that managers were not interested in collaborating on projects that did not reward their own teams.
In the post, Eric shared several examples where organizational structure and policies got in the way of developing “an Amazon ChatGPT (long before ChatGPT was released).”
Amazon employees reportedly highlight Alexa’s problems
Fortune published a long report citing more than a dozen anonymous Amazon employees to highlight the difficulties the company faces in integrating AI capabilities into its virtual assistant. One particular issue raised was that Alexa’s current design makes it harder to integrate with a modern tech stack.
Alexa is reportedly trained to respond in “utterances,” which essentially means it is designed to respond to a user command and announce that it has carried out the requested command (or that it cannot understand the user). As a result, Alexa is not programmed for back-and-forth conversation.
The publication cited a former Amazon machine learning scientist, who explained that this model also led Amazon customers to learn the most efficient way to interact with the virtual assistant: issuing a short prompt for a single action. This created another problem. Although hundreds of millions of users actively talk to Alexa every day, the resulting data is suitable for utterance training, not conversational training, which has reportedly created a major data gap in the organization.
The report further claims that Alexa is a cost center for Amazon and that the company is losing billions of dollars every year because the technology cannot yet be monetized. Meanwhile, Amazon Web Services (AWS) has an AI assistant called Amazon Q that is offered to enterprises as a paid add-on and generates revenue. Over the years, the Amazon Q division has seen increased investment and even integration with Anthropic’s Claude AI model. However, Alexa’s AI team was reportedly denied access to Claude due to data privacy concerns.
When Fortune reached out to Amazon, a spokesperson reportedly denied the claims, saying that the employee-provided details were dated and did not reflect the current state of the division’s LLM development. While that may be true, the more conversational Alexa shown at last year’s tech demo has yet to be released to the public.