Nobel Prize in Physics is awarded to two scientists for developing the methods that form the basis of today’s most powerful AI tools
The 2024 Nobel Prize in Physics has been awarded to two scientists for developing the methods that lay the foundation for today’s powerful AI.
John Hopfield and Geoffrey Hinton received the prestigious award for ‘fundamental discoveries and inventions that enable machine learning with artificial neural networks.’
John Hopfield, of Princeton University, invented the first methods for machine learning systems to store and recreate patterns.
Geoffrey Hinton of the University of Toronto gave these networks the ability to find specific properties, allowing them to perform tasks such as recognizing elements in images.
These scientists’ discoveries paved the way for the artificial neural networks that power modern chatbots like ChatGPT.
The 2024 Nobel Prize in Physics has been awarded to John Hopfield and Geoffrey Hinton for developing the methods that form the basis for today’s powerful AI
Most modern artificial intelligences are based on a type of technology called artificial neural networks, which mimic the connections between neurons in the brain.
In AI, neurons are represented by nodes that influence each other through connections that can be made weaker or stronger, allowing AIs to learn over time.
Without this technology, the powerful systems that power everything from ChatGPT to Apple Intelligence would not be possible.
This year’s Nobel laureates have both played a key role in laying the foundation for these advances since the 1980s.
Ellen Moons, chair of the Nobel Committee for Physics, says: ‘The laureates’ work has already been of the greatest benefit.
‘In physics we use artificial neural networks in all kinds of areas, for example in the development of new materials with specific properties.’
John Hopfield was responsible for inventing a system called the ‘Hopfield Network’ that allows AI to store and recreate patterns.
With his background in physics, Hopfield wanted to understand how individual neurons in the brain work together to create new abilities such as memory and reasoning.
Drawing on examples from magnetic materials, Hopfield imagined that neurons could be represented as a network of ‘nodes’ joined by connections of different strengths.
Today’s AIs use a system called artificial neural networks, which wouldn’t be possible without the work of Hopfield and Hinton
In his earliest work, each node stores a value that is either ‘1’ or ‘0’, like the pixels in a black and white photo.
Hopfield found a way to describe these networks using a property called ‘energy’, which is calculated from the values of all the nodes and the strengths of the connections between them.
Using these equations, a network can be programmed by feeding it an image composed of black and white pixels and adjusting the connections between the nodes so that the stored image has low energy.
When a new pattern is given to the network, the program checks each node to see if the system’s energy would be lower if it were changed from black to white or vice versa.
The Nobel Prize Committee for Physics awarded the two scientists the prize for their ‘fundamental discoveries’ that led to the development of machine learning
Following this rule, the network will check each node until it finally reproduces the original image.
What makes this technology so special is that you can store many different images in one network.
Whenever you give the network a new image, it always returns the most similar saved pattern.
You can think of this as shaping a landscape of peaks and valleys: each stored image forms a valley in a virtual landscape, and the valley floor is where the energy is lowest.
If you dropped a ball into this landscape, it would continue to roll downhill in the direction of lower energy until it was surrounded on all sides by hills – that valley would be the closest pattern to the input pattern.
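The flip-if-energy-drops rule described above can be sketched in a few lines of code. This is an illustrative toy, not the laureates’ own implementation: the six-node pattern, the Hebbian storage rule and the fixed number of update sweeps are all simplifying assumptions.

```python
# Toy Hopfield-style network (illustrative only, not the laureates' code).
# Nodes hold +1 or -1; a pattern is stored by strengthening connections
# between nodes that agree, and recall repeatedly flips any node whose
# flip lowers the network's energy: E = -1/2 * sum_ij w[i][j]*s[i]*s[j].

def train(patterns):
    """Store patterns by setting connection strengths (Hebbian rule)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    """Update nodes one by one; each node moves toward lower energy."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            field = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1   # flip only if it lowers energy
    return s

stored = [1, 1, -1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, -1, -1, 1, -1]            # one node corrupted
print(recall(w, noisy) == stored)          # True: the valley is found again
```

The noisy input “rolls downhill” back to the stored pattern, exactly the ball-in-a-valley picture above.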
That discovery opened up the possibility of networks that could recognize similarities between data.
John Hopfield discovered a way to store images in artificial networks, giving computers the ability to find the nearest stored image when provided with partially distorted data
Geoffrey Hinton received the Nobel Prize for creating the ‘Boltzmann Machine’, which expanded this concept in a revolutionary new way.
These machines use the Hopfield network as a basis, but give the network the new ability to recognize characteristic elements in a certain type of data.
Just as humans can recognize and interpret data according to categories, Hinton wanted to know if the same would be possible for machines.
To do this, Hinton and his colleague Terrence Sejnowski combined Hopfield’s energy landscapes with ideas from statistical physics.
These methods allow scientists to describe systems made up of too many components to track individually, such as the molecules in a gas cloud.
Even though we can’t keep track of every component, we can treat some collective states as more likely than others and calculate those probabilities from the energy available.
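That idea can be made concrete with the Boltzmann distribution, in which a state’s probability is proportional to exp(−E/T). The three states and their energies below are invented purely for illustration.

```python
# Boltzmann distribution sketch: lower-energy states are more probable.
# The states and energy values here are made up for illustration.
import math

energies = {"state_a": 1.0, "state_b": 2.0, "state_c": 3.0}
T = 1.0                                     # temperature (arbitrary units)

weights = {s: math.exp(-e / T) for s, e in energies.items()}
Z = sum(weights.values())                   # partition function (normalizer)
probs = {s: wt / Z for s, wt in weights.items()}

print(probs["state_a"] > probs["state_b"] > probs["state_c"])  # True
print(round(sum(probs.values()), 10))                          # 1.0
```

Even without tracking individual molecules, the distribution tells us which overall states the system is most likely to be found in.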
Geoffrey Hinton (pictured) is often described as the ‘godfather of AI’ for his work creating the first ‘generative’ algorithms capable of learning from examples
Geoffrey Hinton received the award for his work in creating the Boltzmann Machine (illustrated) which extended Hopfield networks with ‘hidden’ layers that allowed them to learn from examples
Hinton’s breakthrough was to take an equation from the nineteenth-century physicist Ludwig Boltzmann, which describes this process, and apply it to a Hopfield network.
The resulting ‘Boltzmann machine’ has nodes like a Hopfield network, but also contains a layer of ‘hidden’ nodes.
The machine is run by updating the values of its nodes one at a time until it reaches a state in which the pattern of the nodes can change without changing the properties of the network as a whole.
This allows the machine to learn by being given examples of what you are looking for.
The machine can be trained by changing the values of its connections until the sample pattern has the highest probability of appearing on the ‘visible nodes’.
Hopfield and Hinton’s advances laid the foundation for the neural networks that power the most advanced modern AI (file photo)
AI chatbots like ChatGPT use artificial neural networks to power their massive systems. This would not have been possible without the fundamental research conducted by Hopfield and Hinton (file photo)
This allows the AI to recognize patterns in things it hasn’t seen before, just as you can immediately tell that a tiger is somehow related to your pet cat, even if you’ve never seen one before.
By stacking many of these networks together, we can create something that begins to resemble many of the AIs we recognize today.
For example, a simple Boltzmann machine can be used to recommend movies to you based on what you have previously enjoyed.
Although the field of AI has come a long way since Hopfield and Hinton made their first discoveries, their work has laid the foundation for some of the most important innovations in recent history.
On Monday, Victor Ambros and Gary Ruvkun received the 2024 Nobel Prize in Physiology or Medicine for their discovery of microRNAs.
The Nobel Prize for Chemistry will be announced tomorrow morning.