This invisible visual hack tricks top AI models into seeing nothing where there is danger
- RisingAttack quietly alters key image features to fool AI without changing how the image looks
- Vision systems in self-driving cars can be blinded by nearly invisible image adjustments
- The attack fools top AI models used in cars, cameras and healthcare diagnostics
Artificial intelligence is increasingly integrated into technologies that depend on visual recognition, from autonomous vehicles to medical imaging, but experts have warned that this growing utility also brings potential security risks.
A new method called RisingAttack could threaten the reliability of these systems by manipulating what the AI sees.
In theory, it can cause a model to miss or misidentify objects even when the images look unchanged to human observers.
Targeted deception through minimal image changes
RisingAttack was developed by researchers at North Carolina State University and is a form of adversarial attack that subtly alters visual input to mislead AI models.
The technique does not require large or obvious image changes; instead, it targets the specific features within an image that are essential for recognition.
“This requires some computing power, but it allows us to make very small, targeted changes to the key features that make the attack successful,” said Tianfu Wu, associate professor of electrical and computer engineering and co-corresponding author of the study.
These carefully designed changes are undetectable to human observers, so the manipulated images look completely normal to the naked eye.
“The end result is that two images may look identical to human eyes, and we can clearly see a car in both images,” Wu explained.
“But because of RisingAttack, the AI would see a car in the first image but would not see a car in the second image.”
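The article does not describe RisingAttack's exact algorithm, so the sketch below is not that method. It is a minimal, generic PGD-style illustration in PyTorch of the underlying idea: making tiny, bounded pixel changes that push down a model's score for one class while the image stays visually unchanged. The model choice, iteration count, perturbation budget and ImageNet class index are illustrative assumptions.

```python
# Illustrative sketch of a targeted adversarial perturbation (PGD-style).
# NOTE: this is NOT the RisingAttack algorithm; it only demonstrates the
# general idea of tiny, bounded pixel changes that suppress one class.

import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Placeholder input; in practice this would be a preprocessed photo of a car
# (ImageNet normalization omitted for brevity).
image = torch.rand(1, 3, 224, 224)
target_class = 817  # ImageNet index for "sports car" (assumed example)

epsilon = 4 / 255   # maximum per-pixel change, far below what humans notice
step = 1 / 255      # step size per iteration
adv = image.clone()

for _ in range(20):
    adv.requires_grad_(True)
    logits = model(adv)
    # Push the score of the target class down so the model stops "seeing" it.
    loss = logits[0, target_class]
    grad = torch.autograd.grad(loss, adv)[0]
    with torch.no_grad():
        adv = adv - step * grad.sign()                         # descend on the class score
        adv = image + (adv - image).clamp(-epsilon, epsilon)   # keep the change tiny
        adv = adv.clamp(0, 1)                                  # stay a valid image

print("original top-1:", model(image).argmax().item())
print("perturbed top-1:", model(adv).argmax().item())
```

Because every pixel is allowed to move by at most epsilon, the perturbed image is visually indistinguishable from the original, which is the same property the researchers describe for their attack.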
This could endanger the safety of critical systems, such as those in self-driving cars, which depend on vision models to detect traffic signs, pedestrians and other vehicles.
If the AI is manipulated into not seeing a stop sign or another car, the consequences could be serious.
The team tested the method against four widely used vision architectures: ResNet-50, DenseNet-121, ViT-B and DeiT-B. All four were successfully manipulated.
“We can influence the AI's ability to see any of the top 20 or 30 targets it has been trained to identify,” said Wu, referring to common examples such as cars, bicycles, pedestrians and stop signs.
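For readers who want to run this kind of robustness check on the same model families, a minimal loading sketch is shown below. The timm model names and pretrained weights are assumptions based on that library's usual naming, not the paper's exact checkpoints.

```python
# Sketch: load the four architectures named in the study for robustness
# testing, using the timm library (model names are assumed, not from the paper).

import timm
import torch

architectures = [
    "resnet50",
    "densenet121",
    "vit_base_patch16_224",
    "deit_base_patch16_224",
]

dummy = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed test image

for name in architectures:
    model = timm.create_model(name, pretrained=True).eval()
    with torch.no_grad():
        pred = model(dummy).argmax(dim=1).item()
    print(f"{name}: predicted ImageNet class {pred}")
```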
Although the current focus is on computer vision, the researchers are already looking at wider implications.
“We are now working on determining how effective the technology is in attacking other AI systems, such as large language models,” Wu noted.
The long-term goal, he added, is not only to expose vulnerabilities but to guide the development of more secure systems.
“In the future, the goal is to develop techniques that can successfully defend against such attacks.”
As attackers continue to find new ways to disrupt AI behavior, the need for stronger digital safeguards becomes more urgent.
Via TechXplore