
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

May 06, 2024 12:16 PM | Anonymous

Reposted from CISA/DHS

The U.S. Department of Homeland Security (DHS) released two new resources to mitigate and understand risks posed by AI: 1) guidelines to mitigate AI risks posed to critical infrastructure and 2) a new report that evaluates the potential for AI to be misused to enable the development or production of chemical, biological, radiological, and nuclear (CBRN) threats. DHS, in coordination with CISA, developed the new safety and security guidelines to address cross-sector AI risks.

WASHINGTON – Today, the Department of Homeland Security (DHS) marked 180 days since President Biden’s Executive Order (EO) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI),” by unveiling new resources to address threats posed by AI: (1) guidelines to mitigate AI risks to critical infrastructure and (2) a report on AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats.

These resources build upon the Department’s broader efforts to protect the nation’s critical infrastructure and help stakeholders leverage AI, which include the recent establishment of the Artificial Intelligence Safety and Security Board. This new Board, announced last week, assembles technology and critical infrastructure executives, civil rights leaders, academics, state and local government leaders, and policymakers to advance the responsible development and deployment of AI.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” said Secretary of Homeland Security Alejandro N. Mayorkas. “When President Biden tasked DHS as a leader in the safe, secure, and reliable development of AI, our Department accelerated our previous efforts to lead on AI. In the 180 days since the Biden-Harris Administration’s landmark EO on AI, DHS has established a new AI Corps, developed AI pilot programs across the Department, unveiled an AI roadmap detailing DHS’s current use of AI and its plans for the future, and much more. DHS is more committed than ever to advancing the responsible use of AI for homeland security missions and promoting nationwide AI safety and security, building on the unprecedented progress made by this Administration. We will continue embracing AI’s potential while guarding against its harms.”

DHS, in coordination with its Cybersecurity and Infrastructure Security Agency (CISA), released new safety and security guidelines to address cross-sector AI risks impacting the safety and security of U.S. critical infrastructure systems. The guidelines organize their analysis around three overarching categories of system-level risk:

  • Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
  • Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
  • Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’s Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

To address these risks, DHS outlines a four-part mitigation strategy, building upon the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), that critical infrastructure owners and operators can consider when approaching their own contextual and unique AI risk situations:

  • Govern: Establish an organizational culture of AI risk management - Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.
  • Map: Understand your individual AI use context and risk profile - Establish and understand the foundational context from which AI risks can be evaluated and mitigated.
  • Measure: Develop systems to assess, analyze, and track AI risks - Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.
  • Manage: Prioritize and act upon AI risks to safety and security - Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.

Countering Chemical, Biological, Radiological, and Nuclear Threats

The Department worked with its Countering Weapons of Mass Destruction Office (CWMD) to analyze the risk of AI being misused to assist in the development or production of CBRN threats, and to recommend steps to mitigate those potential threats to the homeland. This report, developed through extensive collaboration across the United States Government, academia, and industry, furthers long-term objectives around how to ensure the safe, secure, and trustworthy development and use of artificial intelligence, and guides potential interagency follow-on policy and implementation efforts.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for CWMD Mary Ellen Callahan. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI. I am incredibly proud of our team at CWMD for this vital work, which builds upon the Biden-Harris Administration’s forward-leaning Executive Order.”



1305 Krameria, Unit H-129, Denver, CO  80220  Local: 303.322.9667
Copyright © 2015 - 2018 International Foundation for Cultural Property Protection.  All Rights Reserved