Use of Artificial Intelligence at GSA

Number: 2185.1C CIO
Status: Active
Signature Date: 03/11/2026
Expiration Date: 03/30/2029

Purpose:

This directive updates the governing policies for the responsible, efficient, and accelerated adoption of artificial intelligence (AI) technologies and platforms within the General Services Administration (GSA). It aligns with the principles outlined in Office of Management and Budget (OMB) Memoranda M-25-21, M-25-22, and M-26-04 by promoting innovation, enhancing public trust, and ensuring mission-enabling use of AI. The directive establishes standards for the assessment, procurement, use, monitoring, and governance of AI systems, emphasizing risk management, transparency, and lifecycle accountability. It supports the development of agency-wide AI strategies, fosters cross-functional collaboration, and encourages the use of trustworthy, interoperable, and American-made AI solutions. All activities under this directive must comply with applicable laws and with existing security, privacy, and ethics regulations.

Background:

The AI in Government Act of 2020 (Public Law 116-260); the AI Training Act (Public Law 117-207); OMB Memoranda M-25-21, M-25-22, and M-26-04; and OMB Circular No. A-119 direct all federal agencies to:

  1. Accelerate the responsible adoption of AI by emphasizing innovation, governance, and public trust, in accordance with OMB M-25-21. Agencies must reduce bureaucratic barriers and promote mission-enabling AI that benefits the American public while safeguarding civil rights, civil liberties, and privacy.
  2. Empower Chief AI Officers (CAIOs) and other AI leaders to drive strategic planning, workforce development, and cross-agency collaboration. Agencies must establish AI governance boards and publish public AI strategies that identify barriers and outline plans for scaling responsible AI use.
  3. Ensure compliance with applicable federal laws and policies in the development, deployment, and use of AI and automated systems. This includes adherence to privacy, safety, and nondiscrimination standards and alignment with voluntary consensus standards as outlined in OMB Circular A-119.
  4. Establish and maintain processes to measure, monitor, evaluate, and report on AI use cases and their performance, especially for high-impact AI. Agencies must implement risk management practices and be prepared to terminate non-compliant systems.
  5. Conduct regular AI risk assessments, particularly for high-impact systems, and integrate findings into governance and acquisition decisions. Agencies must also contribute to interagency repositories of best practices and tools.
  6. Prioritize AI applications that advance the agency’s mission, improve service delivery, and promote innovation. Agencies should invest in AI-enabling infrastructure, including data governance, workforce training, and reusable tools and models.
  7. Ensure sufficient infrastructure and capacity for AI-ready data, including robust data curation, labeling, and stewardship practices. Agencies must manage data as a strategic asset to support trustworthy AI.
  8. Assess and plan for AI workforce needs by identifying required competencies, offering training (as mandated by the AI Training Act), and aligning hiring and reskilling strategies with evolving AI demands.
  9. Support interagency coordination and standards-setting initiatives, and encourage the adoption of voluntary consensus standards for AI, as directed by OMB Circular A-119 and M-25-21.
  10. Drive efficient and responsible AI acquisition by fostering a competitive U.S. AI marketplace, managing performance and risk throughout the acquisition lifecycle, and ensuring cross-functional engagement in procurement decisions.

Applicability:

This order applies to:

  1. All individuals, including GSA employees and contractors, who access, manage, share, or use data, including those involved in system-to-system data exchanges, and those participating in AI-related activities, training, or governance as defined under the AI Training Act.
  2. All IT systems owned or operated by or on behalf of any GSA Service and Staff Office that process, store, or transmit federal data, especially where AI capabilities are integrated or acquired, in accordance with OMB Memoranda M-25-21, M-25-22, and M-26-04.
  3. All federal data contained in or processed by GSA IT systems, including data used to train, evaluate, or operate AI systems, subject to applicable privacy, security, and intellectual property (IP) safeguards outlined in M-25-22.
  4. The Office of Inspector General (OIG) to the extent that participation aligns with the OIG’s independent authority under the Inspector General Act of 1978, as amended (5 U.S.C. §§ 401–424), and does not conflict with OIG policies or its mission.
  5. The Civilian Board of Contract Appeals (CBCA) only to the extent that participation is consistent with the CBCA’s requisite independence under the Contract Disputes Act (41 U.S.C. §§ 7101–7109) and its legislative history.
  6. All AI systems or services acquired by or on behalf of GSA, excluding common commercial products with embedded AI functionality not primarily used for AI purposes, as defined in OMB Memorandum M-25-22.

Standards and conformity assessments used in AI-related activities must align with OMB Circular A-119, which encourages the use of voluntary consensus standards and minimizes reliance on government-unique standards.

Cancellation:

This directive supersedes CIO 2185.1B, Use of Artificial Intelligence at GSA.

Summary of Changes:

This revision aligns the directive with the Executive Orders and OMB memoranda referenced in the Background section.