You can now use the Claude 3.5 Haiku AI model on both web and mobile apps
Anthropic has quietly released the Claude 3.5 Haiku artificial intelligence (AI) model to users. On Thursday, several internet users began posting about the model’s availability in Claude’s web interface and mobile apps. Anthropic has stated that the new-generation Haiku is the fastest large language model the company has developed. Furthermore, the base model also outperforms Claude 3 Opus, the most capable model of the previous generation, on several benchmarks. Notably, all Claude users get access to Claude 3.5 Haiku, regardless of their subscription tier.
Anthropic releases Claude 3.5 Haiku
Although the AI company has not made any announcement about the new Haiku model’s release, several users on X (formerly known as Twitter) posted about its availability on both the website and the mobile apps. Gadgets 360 staff members were also able to independently verify that Claude 3.5 Haiku is now the default language model in the chatbot. Additionally, it is the only model available to those on Claude’s free tier.
Anthropic first announced the Claude 3.5 family of AI models in October, when the first iteration of the 3.5 Sonnet was released. The company emphasised at the time that 3.5 Haiku is its fastest model. Upgrades in the new generation include lower latency (better response times), improved instruction following, and more accurate tool use.
For enterprises, the AI company highlighted that Claude 3.5 Haiku excels at user-facing products, specialised sub-agent tasks, and generating personalised experiences from large volumes of data.
In terms of performance, the new Haiku model scored 40.6 percent on the SWE-bench Verified coding benchmark, outperforming the first iteration of 3.5 Sonnet as well as OpenAI’s GPT-4o. It also outperformed GPT-4o mini on the HumanEval and Graduate-Level Google-Proof Q&A (GPQA) benchmarks.
Most notably, Anthropic earlier this month optimised Claude 3.5 Haiku for the AWS Trainium2 AI chipset and added support for latency-optimised inference in Amazon Bedrock. The company has yet to add support for Google Cloud’s Vertex AI. The new AI model can only generate text, but it accepts both text and images as input.