Adversarial AI

Adversarial AI involves deliberately manipulating the data fed into AI programs and machine learning models. These attacks can make AI systems behave unpredictably or incorrectly, typically producing errors or misclassifications.

Adversarial AI attacks can pose significant risk and create serious consequences for businesses and consumers alike. For example, researchers placed a two-inch strip of black tape on a 35 mph speed limit sign, tricking a Tesla's Mobileye EyeQ3 camera into reading the sign as 85 mph. The vehicle's automated cruise control then began accelerating to 50 mph over the actual limit.

In the digital world, adversarial AI attacks subtly alter digital images — often by just one pixel — to deceive AI-driven image recognition systems. These changes can lead systems to mislabel images, misidentify faces, or misinterpret visual data from sensors. Understanding and mitigating these risks is vital, particularly in sectors where AI decision-making is critical.
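To make the idea concrete, the sketch below shows one well-known way such perturbations are computed, the fast gradient sign method (FGSM): each input feature is nudged by a small amount in the direction that most increases the model's loss. The toy logistic-regression "classifier," its weights, and the exaggerated perturbation size are illustrative assumptions, not part of the original text or any real image-recognition system.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return an adversarial copy of x for a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (p - y_true) * w, where p is the predicted
    probability. FGSM moves each feature by epsilon in the sign of that
    gradient, increasing the loss with a bounded perturbation.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w          # dL/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

# Toy "image": four pixel intensities. Weights are chosen (hypothetically)
# so the clean input is confidently classified as class 1.
w = np.array([2.0, -1.0, 1.5, 0.5])
b = -0.5
x = np.array([0.8, 0.1, 0.9, 0.4])

clean_pred = sigmoid(np.dot(w, x) + b)           # confidently class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.6)
adv_pred = sigmoid(np.dot(w, x_adv) + b)         # flips to class 0
```

Real attacks on image classifiers use the same principle with much smaller epsilon values, which is why the resulting changes can be imperceptible to humans while still flipping the model's prediction.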
