Date: Sep 30, 2025
Prepared by: Zachary Whitman
1. GSA AI use cases
Provide examples of significant agency AI use cases currently in use or planned to be in use.
GSA organizes its AI activities into three tiers to reflect their level of integration, technical complexity, and mission impact. This structure provides a clear roadmap for scaling AI from broad cultural adoption to deep programmatic transformation, while highlighting high-impact and rights-sensitive applications that require heightened oversight.
1.1 Tier 1 — Chatbot-Based Use Cases (Enterprise Access and Cultural Adoption)
Tier 1 encompasses AI capabilities delivered through the USAi general chatbot, providing every GSA employee with secure, enterprise-level access to generative AI tools. These use cases focus on broad adoption, productivity gains, and knowledge management. Examples include:
- General Enterprise Support: Employees use the chatbot to draft documents, summarize meeting notes, generate first-draft code, and retrieve policy guidance, reducing time spent on routine tasks.
- Customer Experience Assistance: Public-facing pilots test chatbots that provide plain-language answers to common inquiries about federal programs and GSA services.
- Training and Onboarding: New employees leverage the chatbot to navigate GSA policies, IT procedures, and benefits information, accelerating onboarding and reducing help-desk demand.
1.2 Tier 2 — Application Programming Interface (API) Use Cases (Programmatic Integrations and Mission Delivery)
Tier 2 covers API-enabled services built on the USAi platform to support direct mission functions, strategic improvement efforts, and deeper automation. These integrations allow programs to call large language models securely within agency applications. Examples include:
- Acquisition Document Generation: Automated drafting of procurement language, market research summaries, and acquisition strategies to reduce cycle times while maintaining compliance.
- Data Quality Enhancement: Model-driven detection and correction of errors in large administrative datasets, improving the accuracy of reports and analytics.
- Agentic Workflows: AI-powered “co-pilots” embedded in business systems that chain tasks together (e.g., compiling regulatory references, validating inputs, and generating recommendations) to support contracting officers and program managers; a minimal sketch of such a chained workflow appears after this list.
- Bias Impact Analysis: Advanced natural language and statistical models are used to study potential algorithmic bias in federal services and procurement processes, informing policy decisions and action plans.
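To make the agentic pattern concrete, the sketch below shows how a program might chain several model calls through an API. It is illustrative only: the `UsaiClient` class, its `complete` method, the `/v1/completions` endpoint, and the payload shape are hypothetical stand-ins, not the actual USAi interface.

```python
import requests


class UsaiClient:
    """Hypothetical wrapper around a USAi-style completion API (illustrative only)."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def complete(self, prompt: str) -> str:
        # Assumed endpoint and payload shape; the real USAi API may differ.
        resp = requests.post(
            f"{self.base_url}/v1/completions",
            headers=self.headers,
            json={"prompt": prompt, "max_tokens": 512},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["text"]


def draft_acquisition_recommendation(client: UsaiClient, requirement: str) -> str:
    """Chain three steps: compile references, validate inputs, generate a recommendation."""
    references = client.complete(
        f"List the federal acquisition regulations most relevant to: {requirement}"
    )
    validation = client.complete(
        f"Identify missing or inconsistent details in this requirement: {requirement}"
    )
    return client.complete(
        "Draft a recommendation for a contracting officer.\n"
        f"Requirement: {requirement}\n"
        f"Relevant references: {references}\n"
        f"Validation notes: {validation}"
    )
```

In a production integration, each step would also log its inputs and outputs so a contracting officer can review and override the chain at any point.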
1.3 Tier 3 — Integration Use Cases (Embedded AI in Existing Tools and High-Impact Applications)
Tier 3 includes AI features that are embedded directly into existing enterprise platforms or third-party tools, including applications with heightened privacy or civil rights considerations. Examples include:
- Login.gov Face-Matching Technology: Use of facial matching software to support secure identity verification for public authentication services. This high-impact use case is subject to additional testing, human review, and continuous monitoring to safeguard privacy and prevent algorithmic discrimination.
- AI-Enhanced IT Service Management: Natural language classification and routing of help-desk tickets to speed issue resolution.
- Generative Features in Productivity Suites: Secure enablement of built-in generative AI functions in productivity software (e.g., document drafting, spreadsheet analysis) with GSA-specific guardrails.
- Facilities and Property Management Optimization: Predictive analytics integrated into building management systems to forecast energy usage and improve maintenance scheduling.
Through this tiered approach, GSA ensures that AI adoption progresses responsibly and strategically: starting with low-risk, high-value services that enable broad adoption and workforce upskilling (Tier 1), advancing to program-specific integrations that leverage a service-based architecture while maintaining all requisite system and data controls (Tier 2), and finally embedding AI directly into mission systems and high-impact applications to strengthen the value and service delivery of those systems (Tier 3).
2. GSA AI Maturity Goals
Provide an assessment of GSA’s current state of AI maturity and a plan to achieve GSA’s AI maturity goals in the following key areas.
2.1 Current State of AI Maturity
GSA’s AI journey began with a series of pilots and research initiatives designed to explore the emergent and evolving landscape of enterprise AI adoption within a federal context. Early efforts focused on understanding the agency’s workforce—identifying current skillsets, assessing readiness, and mapping opportunities to augment day-to-day activities with AI capabilities. The initial priority was “drudge reduction”: using AI to automate repetitive tasks so employees could concentrate on higher-value mission work.
These early pilots revealed several key needs:
- Enterprise availability of tools — AI capabilities must be accessible to all employees, allowing staff to experiment, adopt, and integrate tools when they are most relevant to their work.
- Training and cultural support — GSA invested in training opportunities and community-building, collaborating with its AI Community of Practice to offer agency-wide learning sessions and hosting initiatives such as “Friday Demo Days,” where employees share their generative AI projects to inspire peer adoption.
- Access to cutting-edge technology — GSA worked with industry partners to shorten procurement timelines for market-leading AI tools, resulting in OneGov agreements that enable GSA and other agencies to purchase AI technologies and platforms at scale.
- Accelerated security authorization — the FedRAMP program launched the “20x” initiative, which expedites the review and approval of generative AI platforms so agencies can safely access the same tools and services used in the private sector.
- Shared services to drive adoption across government — GSA launched USAi, a government-wide AI platform and service model that provides agencies with a secure environment to test, adopt, and scale AI capabilities. USAi helps agencies evaluate cultural readiness, integration opportunities, and technical requirements, enabling data-driven decisions about future enterprise deployment. GSA itself uses the USAi platform offering as part of its own enterprise adoption journey.
2.2 Path to Maturity
Building on these foundations, GSA’s AI maturity plan focuses on three reinforcing objectives:
- Agency-wide enablement — Continue to expand secure, enterprise-level access to AI tools and platforms so every employee can leverage AI in mission delivery.
- Workforce development — Broaden training, guidance, and community engagement to ensure employees at all skill levels can responsibly and effectively use AI.
- Government-wide leadership — Advance shared services like USAi and FedRAMP’s 20x program to help other agencies rapidly adopt secure, state-of-the-art AI solutions.
Through these efforts, GSA is moving from pilot-driven experimentation to sustained, enterprise-wide adoption, while also helping other federal agencies achieve similar AI maturity.
2.3 AI-enabling Infrastructure
Describe GSA’s plan to develop AI-enabling infrastructure across the AI lifecycle including development, testing, deployment, and continuous monitoring.
GSA is developing a secure, scalable, and government-wide AI infrastructure that supports every phase of the AI life cycle—from development and testing to deployment and continuous monitoring. The goal is to provide an environment where AI tools can be responsibly created, evaluated, and used across GSA and by other federal agencies, while maintaining strong safeguards for security, privacy, and equity.
2.3.1 Development and Testing
- Enterprise Data Solution (EDS): GSA’s central data platform serves as the foundation for AI experimentation, model use, and development. EDS provides curated, well-governed datasets, standardized metadata, and controlled access to ensure that AI models are trained and evaluated on high-quality, secure data.
- Research Environments: Dedicated R&D workspaces allow teams to prototype and test AI models in a secure setting before any production deployment. These environments include sandboxed compute resources, version control for models, and automated documentation requirements to promote reproducibility and transparency.
- Testing and Evaluation Tools: All AI systems undergo structured evaluation—using agency-defined test plans, AI Impact Statements, and real-world context testing—to measure accuracy, robustness, equity, and safety before promotion to production.
- USAi Console: The USAi console provides a unified evaluation environment that captures model performance, safety telemetry, and bias metrics across multiple commercial models. Agencies can run standardized test suites, compare model outputs side-by-side, and export evaluation data to support internal reviews or external audits (an illustrative test-suite sketch follows this list).
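As one way to picture a standardized test suite, the sketch below scores several models against the same prompts using a simple keyword check. The `EvalCase` structure, the keyword metric, and the callable-per-model convention are assumptions for illustration; the USAi console's actual test format is not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]  # simple proxy for a correct answer


def keyword_accuracy(answer: str, case: EvalCase) -> float:
    """Fraction of expected keywords present in the model's answer."""
    hits = sum(kw.lower() in answer.lower() for kw in case.expected_keywords)
    return hits / len(case.expected_keywords)


def run_suite(models: dict, cases: list[EvalCase]) -> dict[str, float]:
    """Run every test case against every model and return a mean score per model.

    `models` maps a model name to a callable that takes a prompt and returns text;
    in practice each callable would wrap a secure API call.
    """
    scores = {}
    for name, generate in models.items():
        per_case = [keyword_accuracy(generate(c.prompt), c) for c in cases]
        scores[name] = sum(per_case) / len(per_case)
    return scores
```

Side-by-side comparison then reduces to sorting the returned scores, and the same case list can be re-run whenever a model version changes.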
2.3.2 Deployment and Operations
- USAi Shared Service: GSA’s USAi platform provides agencies with a FedRAMP-authorized environment to deploy and operate generative AI solutions. USAi offers both a chatbot interface and API access, enabling agencies to integrate AI into mission workflows without the need to build their own infrastructure.
- Secure Cloud Architecture: AI deployments leverage multi-tenant cloud services with built-in encryption, identity management, and logging. Deployment pipelines incorporate automated security scanning and configuration baselines aligned with federal cybersecurity standards.
- 20x FedRAMP Initiative: To ensure agencies can access market-leading AI platforms, GSA is leading an accelerated FedRAMP review process that shortens authorization timelines for generative AI tools while maintaining rigorous security standards.
2.3.3 Continuous Monitoring and Risk Management
- Centralized Governance: The AI Governance Board (known as the EDGE board) and AI Safety Team oversee all production systems, requiring ongoing performance monitoring, periodic human review, and annual re-registration of AI use cases.
- Telemetry and Logging: USAi and EDS provide detailed telemetry on model usage, API consumption, and system performance to detect drift, bias, or anomalous behavior (a simple drift check is sketched after this list).
- Incident Response and Privacy Controls: Integrated workflows connect AI operations with cybersecurity and privacy teams, ensuring rapid response to security incidents and compliance with federal privacy requirements.
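One common way to turn usage telemetry into a drift signal is the population stability index (PSI) computed over a categorical output, such as the ticket categories assigned by a classifier. The function below is a minimal sketch under that assumption; it does not reflect the specific metrics USAi or EDS compute.

```python
import math
from collections import Counter


def psi(baseline: list[str], current: list[str]) -> float:
    """Population stability index between two windows of categorical telemetry.

    Scores above roughly 0.2 are commonly treated as a sign of drift worth review.
    """
    categories = set(baseline) | set(current)
    b_counts, c_counts = Counter(baseline), Counter(current)
    score = 0.0
    for cat in categories:
        # Floor the proportions so categories absent from one window do not blow up.
        b = max(b_counts[cat] / len(baseline), 1e-6)
        c = max(c_counts[cat] / len(current), 1e-6)
        score += (c - b) * math.log(c / b)
    return score
```

A scheduled job could compare last quarter's distribution with the current week's and open a review when the score crosses an agreed threshold.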
Through this layered infrastructure, GSA is creating a repeatable and secure pathway for agencies to develop, test, deploy, and continuously monitor AI solutions—enabling innovation at scale while upholding the highest standards of safety, transparency, and trust.
2.4 Data
Describe your agency’s plan to ensure access to quality data for AI and data traceability.
GSA’s AI strategy is grounded in the principle that high-quality, well-governed data is the foundation of trustworthy AI. The agency is building policies, platforms, and tools that ensure data used for AI is accurate, traceable, and reusable—both within GSA and across the Federal Government.
2.4.1 Access to Quality Data
- Enterprise Data Solution (EDS): GSA’s enterprise data platform serves as the central catalog for all datasets used in AI development. All AI projects must register their data assets in EDS, including details on provenance, quality, and sensitivity, ensuring that every dataset can be reviewed, approved, and monitored (an illustrative registration record follows this list).
- Data Quality Controls: System owners must document data collection methods, preparation processes, and quality measures. Data intended for AI development is required to meet standards for representativeness and coverage to reduce bias and improve model reliability.
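The sketch below illustrates what a minimal registration record of this kind could capture. The field names, identifiers, and example values are hypothetical placeholders; the real EDS schema is not reproduced here.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DatasetRegistration:
    """Hypothetical shape of a catalog entry for a dataset used in AI development."""
    dataset_id: str
    title: str
    owner: str
    source_systems: list[str]      # provenance: systems the records originate from
    collection_method: str         # how the data was gathered
    sensitivity: str               # e.g., "public", "CUI", "contains PII"
    last_reviewed: date
    quality_checks: list[str] = field(default_factory=list)


# Example entry (all values are illustrative placeholders).
record = DatasetRegistration(
    dataset_id="eds-example-001",
    title="Building energy usage readings",
    owner="example-program-office",
    source_systems=["building-management-system"],
    collection_method="automated meter export",
    sensitivity="public",
    last_reviewed=date(2025, 9, 1),
    quality_checks=["completeness >= 99%", "no duplicate meter IDs"],
)
```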
2.4.2 Data Sharing and Reuse
- Open and Shared Data Assets: Consistent with the OPEN Government Data Act, GSA publishes qualified datasets to Data.gov and promotes interagency reuse of AI-ready datasets, models, and evaluation tools where privacy and security permit.
- Standard Agreements and Frameworks: GSA applies reusable data-sharing agreements and governance templates to streamline cross-agency access to AI-relevant data, reducing the time required to establish new exchanges.
- Governmentwide Collaboration: Through initiatives such as USAi and OneGov acquisition vehicles, GSA provides a path for other agencies to leverage shared data resources, model artifacts, and best practices without duplicating infrastructure.
2.4.3 Reproducibility, Traceability, Explainability, and Model Lineage
- Reproducibility, Data, and Model Lineage: Reproducibility is reinforced through mandatory metadata in EDS, which records the full provenance of training and testing datasets, preprocessing steps, training runs, and model versions. All AI systems must document the origin, transformations, and use of data throughout the model life cycle, ensuring that evaluation results can be replicated and that any downstream decisions can be traced back to their source data and configurations (an illustrative lineage record follows this list).
- Transparency and Interpretability: GSA requires AI Impact Statements and system test plans to include explainability considerations, ensuring that model decisions can be audited and contested. AI-generated data products are labeled and indexed to clearly identify machine-generated content.
- Continuous Monitoring: Production AI systems undergo periodic reviews to detect data drift, model degradation, or privacy risks, with mechanisms in place to retrain or retire models when quality standards are not met.
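Lineage can be pictured as a record that ties a specific model version back to the datasets and preprocessing steps that produced it. The structure below is an assumed sketch for illustration, not the actual EDS metadata model.

```python
from dataclasses import dataclass


@dataclass
class ModelVersion:
    """Hypothetical lineage record tying a model version back to its inputs."""
    model_id: str
    version: str
    training_datasets: list[str]     # catalog dataset_ids used for training
    evaluation_datasets: list[str]   # catalog dataset_ids used for testing
    preprocessing_steps: list[str]   # ordered, human-readable transformations
    training_run_id: str             # link to logged configuration and metrics


def trace_to_sources(model: ModelVersion) -> dict[str, list[str]]:
    """Return what a reviewer needs to replicate or audit a result from this model."""
    return {
        "datasets": sorted(set(model.training_datasets + model.evaluation_datasets)),
        "preprocessing": model.preprocessing_steps,
        "training_run": [model.training_run_id],
    }
```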
Through these measures, GSA ensures that data used in AI development is discoverable, trustworthy, and accountable, enabling responsible innovation while supporting government-wide sharing and public transparency.
2.5 AI Ready-Workforce
Describe your agency’s plan to recruit, hire, train, retain, and empower an AI-ready workforce and achieve AI literacy for non-practitioners involved in AI.
GSA is advancing a comprehensive plan to recruit, train, retain, and empower an AI-ready workforce that can both build AI systems and responsibly govern their use. The agency recognizes that AI success depends not only on technical experts, but also on broad AI literacy among non-technical staff who shape policy, procurement, and service delivery.
2.5.1 Recruiting and Hiring Technical Talent
- Targeted AI and Data Roles: GSA is expanding hiring for data scientists, machine-learning engineers, human-centered designers, cybersecurity specialists, and evaluation experts to develop scalable, secure AI solutions for mission delivery and shared services like USAi.
- Flexible Hiring Authorities: The agency uses special pay rates, direct-hire authorities, and fellowship programs to compete for in-demand AI talent, while partnering with the U.S. Digital Service and Presidential Innovation Fellows to bring in experienced technologists.
- Interagency Collaboration: GSA coordinates with OPM and the Chief AI Officers Council to share candidate pools, reduce hiring friction, and attract talent motivated by public-sector impact.
2.5.2 Training and Upskilling the Existing Workforce
- Agency-Wide AI Literacy: All employees have access to foundational AI training covering responsible use, data privacy, and basic model capabilities. Training is delivered through online courses, live workshops, and hands-on “Friday Demo Days” where staff share AI prototypes and lessons learned.
- Role-Specific Learning Paths: Tailored curricula provide deeper instruction for developers, product managers, acquisition professionals, and legal staff, including courses on model evaluation, prompt engineering, procurement of AI technologies, and algorithmic bias mitigation.
- Communities of Practice: GSA’s AI Community of Practice hosts regular meet-ups, peer learning sessions, and office hours to build internal networks and spread best practices across business lines and regions.
2.5.3 Retention and Empowerment
- Career Growth and Recognition: GSA provides clear advancement paths for AI professionals, opportunities to rotate across high-impact projects, and recognition programs that highlight innovative AI contributions to mission outcomes.
- Embedded Safety and Ethics Expertise: AI practitioners work alongside privacy and security officers to ensure that technical staff gain experience with responsible AI design and oversight, making GSA an attractive environment for mission-driven technologists.
2.5.4 Priority Application Areas
Technical talent will focus on:
- Developing and scaling the USAi shared service to provide secure, government-wide access to generative AI tools.
- Enhancing the EDS to improve data quality, metadata, and AI model traceability.
- Building evaluation and monitoring frameworks that measure safety, bias, and performance across diverse AI models.
- Supporting mission-facing applications in acquisition, customer experience, and federal property management to deliver measurable taxpayer value.
Through these actions, GSA is cultivating a workforce that not only possesses advanced AI skills but also embodies the principles of transparency, accountability, and public trust—ensuring that AI investments translate into secure, efficient, and fair services for the American people.
2.6 Research and Development
Describe GSA’s efforts to provide AI tools and capacity to support the agency’s AI research and development (R&D) efforts.
GSA does not maintain a dedicated AI research laboratory or separate line of R&D funding in the manner of a science-focused agency, but it actively fosters AI innovation through applied research, pilot programs, and shared infrastructure that enable experimentation and rapid learning. The agency’s approach emphasizes practical R&D that directly informs mission delivery and government-wide adoption.
2.6.1 AI R&D Platforms and Tools
- Enterprise Data Solution (EDS): GSA’s secure, enterprise data environment provides curated datasets, version control, and sandboxed compute resources to support model development and testing. AI projects use EDS to experiment with algorithms, assess data quality, and document model lineage before any production deployment.
- USAi Shared Service: GSA’s USAi platform offers agencies an environment for generative AI exploration, including a chatbot interface and API access for custom R&D use cases. USAi allows GSA and partner agencies to test multiple commercial foundation models and evaluate safety, bias, and performance without building duplicative infrastructure. GSA is preparing joint evaluation efforts with the National Institute of Standards and Technology’s AI Safety Institute and the Cybersecurity and Infrastructure Security Agency’s red-teaming initiative to benchmark frontier models for robustness, adversarial resistance, and bias. These partnerships will allow USAi users to incorporate government-wide safety tests directly into their R&D workflows.
- Evaluation and Safety Harnesses: All R&D efforts are paired with structured evaluation plans, impact statements, and real-world context testing to measure accuracy, robustness, and equity while ensuring compliance with federal privacy and security requirements.
2.6.2 Innovation Through Partnerships and Pilots
- Industry Engagement: GSA collaborates with commercial AI providers through market research, demonstrations, and pilot agreements to evaluate emerging technologies and workforce training offerings. These efforts inform acquisition strategies and feed lessons learned into government-wide procurement vehicles such as the OneGov AI agreements.
- FedRAMP 20x Initiative: In partnership with the FedRAMP program, GSA is piloting an accelerated authorization process to make leading AI platforms available for R&D and operational use across government, ensuring agencies can access the same cutting-edge capabilities as the private sector.
- Cross-Agency Collaboration: Through USAi, GSA provides R&D capacity to other agencies, enabling them to test models, share evaluation data, and develop AI adoption strategies in a secure, multi-tenant environment. The USAi platform also allows tenant agencies to share models, lessons learned, and data assets should they require those capabilities.
2.6.3 Complementary Innovation Methods
Even without a traditional research lab, GSA drives innovation by embedding R&D principles into day-to-day operations. Internal programs such as “Friday Demo Days,” hackathons, and targeted pilot projects create opportunities for employees to prototype AI solutions, share findings, and scale successful approaches across the enterprise.
Through these efforts, GSA’s AI research and experimentation will remain integral to mission delivery, creating a pipeline of tested, secure, and cost-effective solutions that can be rapidly transitioned from concept to government-wide deployment.
2.7 Governance and Risk Management
Describe GSA’s plan to develop enterprise capacity for AI innovation.
GSA is building enterprise readiness for AI innovation through a layered governance framework that promotes safe experimentation while enforcing rigorous risk controls. Under the agency’s AI governance directive, the Chief Artificial Intelligence Officer (CAIO) maintains agency-wide visibility of AI activities and chairs both the AI Governance Board and the AI Safety Team.
- AI Governance Board (known as the EDGE Board) — Co-chaired by the Chief Data Officer (CDO) and the Deputy Administrator, this body sets enterprise risk tolerance, approves high-impact use cases, and integrates AI oversight with GSA’s enterprise risk management program.
- AI Safety Team — A cross-functional working group empowered to review every AI request, assess risk, and enforce privacy and security controls. The team adjudicates use cases across familiarization, pre-acquisition, research and development, and production categories before deployment.
To strengthen enterprise capacity, GSA requires all high-impact use cases to submit AI Impact Statements, independent evaluation plans, and real-world test results prior to deployment. Approved systems undergo continuous monitoring, human-in-the-loop validation, and annual re-registration, with thresholds for human review and mitigation of emergent risks. Every production system must also obtain an Authorization to Operate, complete privacy assessments, and be publicly disclosed in an AI use-case inventory.
This structure enables GSA to scale innovation through pilots, dedicated R&D environments such as the EDS, and shared services like USAi, while ensuring that new AI capabilities are tested, validated, and continually evaluated for safety, fairness, and mission impact.
Describe your agency’s plan to develop the necessary operations, governance, and infrastructure to manage risks from the use of AI, including risks related to information security and privacy.
GSA integrates AI risk management with existing data governance, privacy, cybersecurity, and enterprise risk programs to manage information-security and privacy risks across the AI life cycle.
- Coordinated oversight — The CAIO, Chief Information Security Officer, Chief Privacy Officer, and Data Governance Leads jointly review all production or production-intent AI systems. AI enhancements to existing IT tools trigger re-authorization within the agency’s security framework.
- Data safeguards — All datasets used for design, training, testing, and operation must be registered in the EDS catalog and adhere to internal data-sharing and sensitivity requirements. Sensitive data cannot be used without explicit clearance and a valid Authorization to Operate.
- Incident response — Any cybersecurity or privacy incident involving AI requires re-submission of the use case for reassessment within strict timelines.
- Privacy and equity protections — Covered AI systems must proactively mitigate algorithmic discrimination, provide human alternatives or fallback options, and publish plain-language notices of AI use.
Through this integrated governance model, GSA ensures that every AI system, whether internally developed or commercially procured, is aligned with federal directives, FedRAMP security controls, and GSA’s enterprise risk management framework, enabling responsible adoption of AI while protecting privacy and mission integrity.
2.8 Resource Tracking and Planning
Describe GSA’s plan to identify, track, and facilitate future AI investment or procurement.
GSA is implementing a structured approach to identify, track, and plan future AI investments and procurements to ensure that resources align with mission priorities and deliver measurable value to the taxpayer. This approach combines enterprise-wide visibility into AI activities with standardized budgeting and acquisition practices, allowing the agency to manage costs, monitor usage, and scale successful solutions.
2.8.1 Identification of AI Investments
- Central AI Inventory: All AI use cases—whether pilots, R&D efforts, or production systems—must be registered in GSA’s enterprise AI inventory. This inventory captures key information on funding sources, technical scope, data requirements, and risk profiles, providing a single view of AI activities across the agency.
- Enterprise Risk and Budget Integration: The AI Governance Board reviews proposed investments alongside GSA’s enterprise risk management process to ensure that funding decisions reflect both mission impact and risk tolerance.
2.8.2 Tracking and Cost Transparency
- USAi and Platform Telemetry: GSA’s USAi platform provides detailed usage and cost telemetry, enabling real-time tracking of model consumption, API calls, and user activity. This data informs budget planning, helps program offices forecast future demand, and allows agencies to weigh the value of consumption-based AI platforms against license-based platform business models (a simple cost roll-up sketch follows this list).
- Annual Re-Registration and Reporting: All production or production-intent AI systems are re-evaluated annually for compliance, cost efficiency, and continued mission relevance, ensuring that resource allocations remain aligned with agency goals.
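As a simple illustration of how raw telemetry can roll up into budget figures, the function below aggregates per-call token usage into a cost per program office. The event fields, model names, and rates are placeholders; actual USAi pricing and telemetry schemas are not shown here.

```python
from collections import defaultdict


def monthly_cost_by_office(
    usage_events: list[dict], price_per_1k_tokens: dict[str, float]
) -> dict[str, float]:
    """Roll raw usage telemetry up to a monthly cost per program office.

    Each event is assumed to look like:
      {"office": "office-a", "model": "model-x", "tokens": 1800}
    """
    totals: dict[str, float] = defaultdict(float)
    for event in usage_events:
        rate = price_per_1k_tokens[event["model"]]
        totals[event["office"]] += event["tokens"] / 1000 * rate
    return dict(totals)


# Illustrative run with placeholder offices, models, and rates.
events = [
    {"office": "office-a", "model": "model-x", "tokens": 1800},
    {"office": "office-b", "model": "model-x", "tokens": 5200},
    {"office": "office-a", "model": "model-y", "tokens": 900},
]
rates = {"model-x": 0.01, "model-y": 0.03}   # dollars per 1,000 tokens (illustrative)
print(monthly_cost_by_office(events, rates))  # {'office-a': 0.045, 'office-b': 0.052}
```

Monthly totals of this kind feed directly into the multi-year forecasting described below.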
2.8.3 Planning for Future Investments
- Acquisition Readiness: GSA applies standardized acquisition strategies, such as the OneGov AI agreements, to streamline procurement of commercial AI products and services across government. These vehicles allow for rapid scaling of proven solutions and reduce duplication of effort.
- Accelerated Authorization: Through the FedRAMP 20x initiative, GSA works to expedite the security review of generative AI platforms, enabling faster deployment of market-leading technologies while maintaining rigorous safeguards.
- Data-Driven Budget Forecasting: Insights from the AI inventory, telemetry, and evaluation processes feed into multi-year budget planning to anticipate resource needs for emerging AI opportunities and government-wide shared services.
By combining comprehensive tracking, cost transparency, and forward-looking acquisition planning, GSA ensures that AI investments are strategically prioritized, fiscally responsible, and well positioned to deliver scalable benefits across the agency and the broader federal enterprise.