

INTERNATIONAL FOUNDATION FOR
CULTURAL PROPERTY PROTECTION


NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

January 17, 2024 7:58 AM | Anonymous

Reposted from EMR-ISAC

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
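The "poisoning" described here refers to corrupting an AI system's training data so the resulting model misbehaves. As a rough illustration only, the toy sketch below shows how a handful of mislabeled points injected near the decision region can flip a simple nearest-neighbor classifier's predictions; the classifier, data values, and injected points are all hypothetical and are not drawn from the NIST report.

```python
# Toy sketch of training-data poisoning (label manipulation).
# A 1-nearest-neighbor classifier on a 1-D feature: clean training
# data yields correct predictions, but a few attacker-injected
# mislabeled points near the test region flip the outputs.

def predict_1nn(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points the 1-NN classifier labels correctly."""
    return sum(predict_1nn(train, x) == y for x, y in test) / len(test)

# Clean training data: class 0 clustered near 0, class 1 near 10.
clean = [(float(i), 0) for i in range(5)] + [(10.0 + i, 1) for i in range(5)]
test = [(1.5, 0), (3.5, 0), (10.5, 1), (12.5, 1)]

# Attacker injects two mislabeled points inside the class-1 region.
poison = [(10.4, 0), (12.4, 0)]
poisoned = clean + poison

print(accuracy(clean, test))     # clean model classifies all test points
print(accuracy(poisoned, test))  # poisoned model misclassifies half of them
```

Real poisoning attacks on ML systems are far subtler, but the mechanism is the same: an adversary who can influence training data can steer the model's behavior, which is why the taxonomy treats data provenance as a core concern.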

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice.

The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.





1305 Krameria, Unit H-129, Denver, CO 80220  Local: 303.322.9667
Copyright © 1999 International Foundation for Cultural Property Protection. All Rights Reserved