Thursday, September 19, 2024
‘Nobody knows what makes humans so much more efficient’: Tiny language models based on Homo sapiens could help explain how we learn and improve the efficiency of AI – for better or for worse

by Jeffrey Beilley

Tech companies are shifting their focus from building the largest language models (LLMs) to developing smaller language models (SLMs) that match or even surpass them.

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) sit at the large end of the spectrum, while Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters, and Apple Intelligence runs on ‘only’ around 3 billion parameters.
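To make those parameter counts concrete, a rough back-of-envelope calculation shows how they translate into memory just for storing the weights. This is a hedged sketch, not a benchmark: it assumes 16-bit (fp16/bf16) weights, a common serving format, and ignores activations, the KV cache, and quantization, any of which changes the real footprint.

```python
# Back-of-envelope weight memory: parameters x bytes per parameter.
# Parameter counts are the figures quoted above; 2 bytes/param assumes
# 16-bit (fp16/bf16) weights. Activations and KV cache are ignored.
MODELS = {
    "GPT-4 (estimated)": 1.8e12,
    "Llama 3 (largest)": 400e9,
    "GPT-3.5": 175e9,
    "Phi-3 (largest)": 14e9,
    "Phi-3 (smallest)": 3.8e9,
    "Apple Intelligence": 3e9,
}

BYTES_PER_PARAM = 2  # fp16/bf16

def weight_memory_gb(params: float) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return params * BYTES_PER_PARAM / 1e9

for name, params in MODELS.items():
    print(f"{name:>20}: ~{weight_memory_gb(params):,.0f} GB")
```

On these assumptions, a ~3-billion-parameter model needs on the order of 6 GB for its weights, within reach of a phone or laptop, while the largest models need thousands of gigabytes spread across many datacenter GPUs, which is why the shift toward small models matters for on-device AI.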
