WASHINGTON, D.C. – The Department of Homeland Security issued Policy Statement 139-06 on August 8, 2023, titled “Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components,” establishing the department’s first formal guidance for AI procurement and deployment. The policy was issued pursuant to Executive Order 13960 “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” which requires federal agencies to create and make publicly available an inventory of non-classified and non-sensitive AI use cases.

Executive Order 13960, issued in December 2020, charges the Office of Management and Budget with issuing instructions to agencies for the collection, reporting, and publication of AI use case information. Within 180 days of that initial guidance, and annually thereafter, each agency must prepare an inventory of its non-classified and non-sensitive AI use cases, including current and planned uses, and share the inventory with other government agencies and the public.

The policy was superseded in January 2025 by Directive 139-08, signed by DHS Under Secretary for Management Randolph Alles, after the department had been “taking its first steps at institutionalizing AI operations, including launching a chief AI officer role”. This replacement after 16 months indicates the original policy required substantial revision for operational implementation.

Executive Order Requirements

The 2023 policy established requirements that “DHS systems, programs, and activities using AI will conform to the requirements of Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and mandated that “DHS will only acquire and use AI in a manner that is consistent with the Constitution and all other applicable laws and policies”.

Executive Order 13960 requires agencies to “design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law”. The executive order outlines nine principles agencies must follow when designing, developing, acquiring, and using AI in the federal government, with trustworthy AI defined as design and use that fosters public trust while protecting privacy, civil rights, and civil liberties.

The order requires agencies to identify, review, and assess existing AI deployed in support of agency missions for inconsistencies with its principles, develop plans to achieve consistency within 120 days of completing their inventories, and implement approved plans within 180 days of plan approval.

DHS AI Use Case Implementation

DHS has been publicly disclosing AI use cases annually since 2022, as first required by Executive Order 13960; the 2024 inventory included 158 active use cases, up from 67 total use cases in 2023. According to DHS, the increase reflects “a clear and consistent definition for an AI use case” that “clarifies potential misunderstandings about AI use at DHS, removing prior inconsistencies and confusing information,” as previous inventories included “some ideas for AI use that were never implemented and entries for technologies that were not actually AI”.

The Government Accountability Office found that “DHS’s inventory of AI systems for cybersecurity is not accurate,” specifically noting that “the inventory identified two AI cybersecurity use cases, but officials told us one of these two was incorrectly characterized as AI”. GAO determined that although “DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI”.

DHS provides the inventory “in accordance with the Advancing American AI Act (December 2022) and Executive Order 13960,” with the Office of Management and Budget issuing guidance annually for the AI Use Case Inventory and DHS updating its inventory accordingly.

Operational Restrictions

The policy prohibited DHS from collecting, using, or disseminating data used in AI activities “based on the inappropriate consideration of race, ethnicity, gender, national origin, religion, sexual orientation, gender identity, age, nationality, medical condition, or disability”. However, the document provided no definition of “inappropriate consideration” or mechanisms for detecting violations.

Additional restrictions stated that “DHS will not use AI to improperly profile, target, or to discriminate against any individual, or entity, based on the individual characteristics identified above, as reprisal or solely because of exercising their Constitutional rights” and “DHS will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance, or tracking of individuals”.

The policy required DHS to “develop, adopt, and apply a suitable enterprise risk management framework approach to AI, considering existing Federal and non-governmental risk management frameworks” with application to “evaluate all use cases early in their life cycle to assess risk across a broad range of Departmental and public equities”.

GAO Assessment of DHS AI Implementation

GAO assessed DHS’s implementation of AI accountability practices and found that “DHS fully implemented four of the 11 key practices and implemented five others to varying degrees in the areas of governance, performance, and monitoring”. However, DHS “did not implement two practices: documenting the sources and origins of data used to develop the PII detection capabilities, and assessing the reliability of data, according to officials”.

GAO’s AI Framework calls for “management to provide reasonable assurance of the quality, reliability, and representativeness of the data used in the application, from its development through operation and maintenance,” noting that “addressing data sources and reliability is essential to model accuracy”.

GAO made eight recommendations to DHS, including expanding its review process to verify the accuracy of AI inventory submissions and fully implementing key AI Framework practices such as documenting sources and ensuring data reliability. DHS concurred with all eight recommendations.

Implementation Timeline

The policy directed the Chief Information Officer and Under Secretary for Science and Technology to “establish an AI Policy Working Group (AIPWG)” in consultation with the Chief Procurement Officer, Officer for Civil Rights and Civil Liberties, Chief Privacy Officer, and Under Secretary for Strategy, Policy, and Plans.

Within two weeks of issuance, DHS components were required to “identify a senior career employee or servicemember with appropriate technical expertise to participate in the AIPWG” and provide “an updated inventory of current use cases of AI within their respective Component” and “an accounting of all planned use-cases of AI within their respective Component”.

The policy mandated completion of formal directives “no later than 12 months after the publication of this Policy Statement”. This deadline was not met, as the replacement directive was issued 16 months later, suggesting implementation challenges during the working group process.

Evolution to Directive 139-08

The January 2025 replacement directive established explicit prohibited uses, including “relying on AI outputs as the sole basis for law enforcement actions (like arrests, searches, seizures, or issuing citations), civil enforcement actions (such as fines or injunctions), or denial of government benefits”. This operational specificity was absent from the 2023 policy statement.

Directive 139-08 created formal governance structures including a “DHS AI Governance Board” and “DHS AI Council” with defined membership and responsibilities, and established “AI Incident Reporting and Response” procedures for managing problems involving AI use.

DHS announced its first policies for responsible AI use in September 2023. The subsequent OMB Memo M-24-10, issued in March 2024, provided “government-wide requirements for AI risk management, as directed in President Biden’s AI Executive Order”; DHS noted that “where requirements differed between DHS’s internal AI policies and M-24-10, we met the higher standard”.

The rapid replacement of Policy Statement 139-06 demonstrates the challenges federal agencies face in implementing AI governance frameworks that balance operational requirements with oversight obligations. Executive Order 13960 established clear mandates for AI inventory reporting and trustworthy AI principles, but translating these requirements into operational policies required multiple iterations and external oversight to achieve compliance.

Our analysis examined DHS Policy Statement 139-06 issued August 8, 2023, Executive Order 13960 from December 2020, GAO report GAO-24-106246 from February 2024, and subsequent DHS AI inventory reporting requirements across the Department of Homeland Security’s implementation of federal AI governance mandates.
