‘Nobody knows what makes humans so much more efficient’: Tiny language models based on Homo sapiens could help explain how we learn and improve the efficiency of AI – for better or for worse

Tech companies are shifting their focus from building ever-larger language models (LLMs) to developing small language models (SLMs) that can match or even surpass their performance.

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) sit at the large end of the scale, while Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters and Apple Intelligence runs on ‘only’ around 3 billion.
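
To get a feel for why these parameter counts matter in practice, here is a rough back-of-the-envelope sketch: a model’s weights alone take up (parameter count × bytes per parameter) of memory. The 2-byte (fp16) weight format below is an illustrative assumption, not something the article specifies, but it shows why a 3-billion-parameter model can live on a phone while a trillion-parameter one cannot.

```python
# Rough memory footprint of model weights: parameters x bytes per parameter.
# Parameter counts are taken from the article; fp16 (2 bytes/weight) is an
# assumed storage format used purely for illustration.

MODELS = {
    "Llama 3": 400e9,
    "GPT-3.5": 175e9,
    "GPT-4 (est.)": 1.8e12,
    "Phi-3-mini": 3.8e9,
    "Apple Intelligence": 3e9,
}

BYTES_PER_PARAM = 2  # fp16 weights (assumption)

for name, params in MODELS.items():
    gib = params * BYTES_PER_PARAM / 2**30  # bytes -> GiB
    print(f"{name:>20}: ~{gib:,.0f} GiB just to hold the weights")
```

Running this puts GPT-4’s estimated weights in the multi-terabyte range, while a 3-billion-parameter model fits in roughly 6 GiB, comfortably within the memory of a modern smartphone.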
