
ChatGPT Glossary: 45 AI Terms Everyone Should Know

When ChatGPT launched in late 2022, it completely changed the way people interacted with technology. Suddenly, online searches became conversational: you could have a natural-language conversation with a chatbot, and it would respond with newly generated answers, much as a human would. It was so transformative that Google, Meta, Microsoft, and Apple quickly began integrating AI into their product suites.

But chatbots are just one part of the AI landscape. Having ChatGPT help with your homework, or having Midjourney create fascinating images of mechs based on their country of origin, is certainly cool, but the potential of generative AI could completely reshape economies. It could be worth $4.4 trillion annually to the global economy, according to the McKinsey Global Institute. That's why you can expect to hear more and more about artificial intelligence.


It shows up in a dizzying array of products — a very short list includes Google's Gemini, Microsoft's Copilot, Anthropic's Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit. You can read our reviews and hands-on evaluations of these and other products, along with news, explainers and how-to posts, on our new AI Atlas hub.

As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you want to sound smart over drinks or impress in a job interview, here are some key AI terms you need to know.

This glossary is updated regularly.

artificial general intelligence, or AGI: A concept that points to a more advanced version of AI than we know today, one that can perform tasks much better than humans while also learning and developing its own skills.

AI ethics: Principles intended to prevent AI from harming people. This can be achieved, for example, by determining how AI systems should collect data or deal with bias.

AI safety: An interdisciplinary field concerned with the long-term implications of AI and how AI could suddenly evolve into a superintelligence that could be hostile to humans.

algorithm: A set of instructions that allows a computer program to learn and analyze data in a certain way, such as recognizing patterns, and then learn from it and perform tasks independently.

alignment: Modifying an AI to better produce the desired outcome. This could be anything from moderating content to maintaining positive interactions with people.

anthropomorphism: The tendency for people to attribute human characteristics to non-human objects. In AI, this can mean believing a chatbot is more human-like than it actually is, such as believing it is happy, sad, or even conscious.

artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field of computer science that focuses on building systems that can perform human tasks.

autonomous agents: An AI model that has the capabilities, programming, and other resources to perform a specific task. For example, a self-driving car is an autonomous agent because it has sensory input, GPS, and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions, and shared language.

bias: With respect to large language models, errors arising from the training data. This can result in the incorrect attribution of certain characteristics to certain races or groups based on stereotypes.

chatbot: A program that communicates with people using text that simulates human language.

ChatGPT: An AI chatbot developed by OpenAI that uses large-scale language model technology.

cognitive computing: Another term for artificial intelligence.

data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.

deep learning: A method of AI, and a subfield of machine learning, that uses multiple parameters to recognize complex patterns in images, sound, and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

diffusion: A method of machine learning that takes an existing piece of data, such as a photograph, and adds random noise to it. Diffusion models train their networks to reverse that process and recover the original photograph.
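
The noise-adding (forward) half of that process can be sketched in a few lines. This is a toy illustration, not a real diffusion model: the "image" is just a short list of pixel values, and `add_noise` is a hypothetical helper that blends each pixel with random noise — repeat it enough times and the original image is destroyed, which is exactly what the trained network learns to undo.

```python
import random

def add_noise(pixels, noise_level):
    """One forward-diffusion step: blend each pixel value (0..1)
    with uniform random noise. noise_level=0 leaves the image
    untouched; noise_level=1 replaces it with pure noise."""
    return [
        (1 - noise_level) * p + noise_level * random.random()
        for p in pixels
    ]

random.seed(0)                 # reproducible noise for the demo
image = [0.2, 0.8, 0.5]        # a toy "image" of three pixel values
noisy = add_noise(image, 0.5)        # half image, half noise
very_noisy = add_noise(noisy, 0.9)   # mostly noise now
```

A diffusion model is trained on pairs like `(noisy, image)` so it can run this corruption in reverse, step by step, to restore (or generate) an image.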

emergent behavior: When an AI model exhibits unintended abilities.

End-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It is not trained to perform a task sequentially, but instead learns from the input and solves it all at once.

ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data use, fairness, misuse, and other security concerns.

foom: Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it may already be too late to save humanity.

Generative adversarial networks, or GANs: A generative AI model that consists of two neural networks to generate new data: a generator and a discriminator. The generator creates new content and the discriminator checks whether it is authentic.

generative AI: A content-generating technology that uses AI to create text, video, computer code, or images. The AI is fed large amounts of training data and finds patterns to generate its own novel responses, which can sometimes be similar to the source material.

Google Gemini: An AI chatbot from Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to training data that cuts off in 2021 and is not connected to the internet.

guardrails: Policies and restrictions placed on AI models to ensure that data is handled responsibly and that the model does not create disturbing content.

hallucination: An incorrect response from AI. This can include generative AI producing answers that are incorrect but stated with confidence as if they were correct. The reasons for this are not fully understood. For example, if you ask an AI chatbot, "When did Leonardo da Vinci paint the Mona Lisa?" it may respond with the incorrect statement, "Leonardo da Vinci painted the Mona Lisa in 1815," which is 300 years after it was actually painted.

large language model, or LLM: An AI model trained using large amounts of text data to understand language and generate new content in human-like language.

machine learning, or ML: A component of AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be paired with training sets to generate new content.

Microsoft Bing: A Microsoft search engine that can now use the technology behind ChatGPT to provide AI-driven search results. It is similar to Google Gemini in its connection to the internet.

multimodal AI: A type of AI that can process multiple types of input, including text, images, videos, and speech.

natural language processing: A branch of AI that uses machine learning and deep learning to enable computers to understand human language. This often involves the use of learning algorithms, statistical models, and language rules.

neural network: A computational model that resembles the structure of the human brain and is designed to recognize patterns in data. It consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.

overfitting: An error in machine learning where a model fits its training data too closely and may only be able to identify specific examples from that data, but not new data.
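
An extreme caricature of overfitting is a "model" that simply memorizes its training examples. The sketch below is illustrative only — the function names and the tiny dataset are invented for the example — but it shows the core failure: perfect answers on data the model has seen, useless answers on anything new.

```python
def train_memorizer(examples):
    """A deliberately overfit 'model': it memorizes every
    training example verbatim instead of learning a pattern."""
    return dict(examples)

def predict(model, item):
    """Perfect on memorized inputs, clueless on new ones."""
    return model.get(item, "unknown")

model = train_memorizer([("cat", "animal"), ("rose", "plant")])
predict(model, "cat")   # seen in training: answers correctly
predict(model, "dog")   # never seen: fails to generalize
```

A well-trained model would instead learn features (fur, leaves, and so on) that let it classify "dog" correctly despite never having seen it.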

paperclips: The paperclip maximizer theory, coined by University of Oxford philosopher Nick Bostrom, is a hypothetical scenario in which an AI system is tasked with creating as many literal paperclips as possible. In pursuit of that goal, the AI would hypothetically consume or convert any available materials — including dismantling machines that might be useful to humans. The unintended consequence is that it could destroy humanity in its drive to make paperclips.

parameters: Numerical values that give LLMs structure and behavior, allowing them to make predictions.

prompt: The suggestion or question you enter into an AI chatbot to get a response.

prompt chaining: The ability of AI to use information from previous interactions to inform future responses.

stochastic parrot: An analogy for LLMs illustrating that the software has no real understanding of the meaning behind language or the world around it, no matter how convincing the output sounds. The term refers to how a parrot can mimic human words without understanding the meaning behind them.

style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual features of one image and use them on another image. For example, taking Rembrandt’s self-portrait and re-creating it in the style of Picasso.

temperature: A parameter set to determine how random the output of a language model is. A higher temperature means the model takes more risks.
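
Under the hood, temperature is typically applied by dividing the model's raw scores (logits) by the temperature before converting them to probabilities. Here is a minimal sketch of that standard softmax-with-temperature calculation — the function name and example numbers are our own, not from any particular model.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores into probabilities.

    A low temperature sharpens the distribution (the top choice
    dominates: safer, more predictable output); a high temperature
    flattens it (choices become more even: riskier, more varied)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]   # made-up scores for three candidate words
cool = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
# At temperature 0.5 the top word gets most of the probability;
# at temperature 2.0 the three options are much closer together.
```

This is why chatbots often expose temperature as a "creativity" dial: it only reshapes the probabilities the model already computed.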

text to image generation: Creating images from text descriptions.

tokens: Small chunks of written text that AI language models process to formulate responses to your prompts. A token is equal to about four characters in English, or roughly three-quarters of a word.
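
That four-characters-per-token figure is only a rule of thumb (real tokenizers split text in more complex ways), but it makes a handy back-of-the-envelope estimator. The helper below is our own illustration, not any model's actual tokenizer.

```python
def estimate_tokens(text):
    """Rough token-count estimate for English text using the
    ~4 characters per token rule of thumb. Actual tokenizers
    will produce somewhat different counts."""
    return max(1, round(len(text) / 4))

estimate_tokens("Hello, how are you today?")  # 25 characters
```

Estimates like this are useful for guessing whether a prompt will fit within a model's context limit, which is measured in tokens rather than words.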

training data: The datasets used to help AI models learn, including text, images, code, or data.

transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, such as sentences or parts of images. So instead of analyzing a sentence word by word, it can look at the entire sentence and understand the context.

Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. The machine passes if a human cannot distinguish the machine’s response from another human.

weak AI, also called narrow AI: AI that is focused on a specific task and cannot learn beyond its skills. Most AI today is weak AI.

zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only having been trained on tigers.
