
Adversarial AI

Adversarial AI involves subtly manipulating the inputs to AI programs and machine learning models. These attacks can make AI systems behave unpredictably or incorrectly, typically producing errors or misclassifications.

Adversarial AI attacks can pose significant risk and create serious consequences for businesses and consumers alike. In one well-known demonstration, a two-inch strip of black tape placed on a speed limit sign caused the Mobileye EyeQ3 camera used by a Tesla to misread the sign and feed incorrect information to the vehicle’s autonomous driving feature, prompting the Tesla to accelerate to 50 mph over the posted limit.

In the digital world, adversarial AI attacks subtly alter digital images — often by just one pixel — to deceive AI-driven image recognition systems. These changes can lead systems to mislabel images, misidentify faces, or misinterpret visual data from sensors. Understanding and mitigating these risks is vital, particularly in sectors where AI decision-making is critical.
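One common way such image perturbations are crafted is the Fast Gradient Sign Method (FGSM), which nudges each input feature a small, bounded amount in the direction that most increases the model's loss. The sketch below illustrates the idea on a toy logistic classifier with hypothetical fixed weights; it is a minimal illustration of the technique, not a reproduction of any attack on a production system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Return x shifted by eps in the direction that increases the loss.

    For a logistic classifier with cross-entropy loss, the gradient of
    the loss with respect to the input x is (p - y_true) * w, where
    p = sigmoid(w . x + b).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w          # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)   # FGSM: step along the gradient's sign

# Toy model and a "clean" input it confidently assigns to class 1.
w = np.array([4.0, -3.0, 2.0])         # hypothetical fixed weights
b = 0.0
x = np.array([0.9, 0.1, 0.3])
clean_score = sigmoid(np.dot(w, x) + b)   # > 0.5, i.e. class 1

# A bounded perturbation (each feature moves by at most eps) flips the
# prediction even though the input changes only slightly -- the essence
# of an adversarial example.
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)
adv_score = sigmoid(np.dot(w, x_adv) + b)  # < 0.5, i.e. class 0
```

The same principle scales up to image classifiers: there, `x` is the pixel array, the gradient comes from backpropagation through the network, and a perceptually tiny `eps` can be enough to change the predicted label.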
