Government Announces Completion of 150-Day Actions Tasked by President Biden’s Landmark Executive Order on AI
Today, Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) is issuing its first government-wide policy to mitigate the risks and harness the benefits of artificial intelligence (AI), implementing core elements of President Biden’s landmark AI Executive Order. That order directed sweeping action to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world. Federal agencies reported that they have completed all of the 150-day actions tasked by the Executive Order, building on their earlier success in completing all 90-day actions.
This multifaceted directive to federal departments builds on the Biden-Harris Administration’s track record of ensuring the United States leads the way in responsible AI innovation. In recent weeks, OMB announced that the President’s budget invests in agencies’ ability to responsibly develop, test, procure, and integrate innovative AI applications across the federal government.
Consistent with the President’s Executive Order, OMB’s new policy directs the following actions:
Addressing risks from the use of AI
This guidance places people and communities at the center of the government’s innovation goals. Because of the role federal agencies play in our society, they have a distinct responsibility to identify and manage AI risks, and the public must be able to trust that agencies will protect their rights and safety.
By December 1, 2024, federal agencies will be required to implement concrete safeguards when using AI in ways that could impact Americans’ rights or safety. These safeguards include a range of mandatory actions to assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. The safeguards apply to a wide range of AI applications, from health and education to employment and housing.
For example, by adopting these safeguards, government agencies can ensure that:
- Travelers can opt out of TSA facial recognition at the airport without any delay or losing their place in line.
- When AI is used in the federal healthcare system to support critical diagnostic decisions, a human oversees the process of verifying the tool’s results to avoid disparities in healthcare access.
- When AI is used to detect fraud in government services, consequential decisions are subject to human oversight, and affected individuals have the opportunity to seek remedy for AI-related harms.
If an agency cannot apply these safeguards, it must cease using the AI system, unless agency leadership can justify why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.
To protect federal employees as the government deploys AI, OMB’s policy encourages agencies to consult with federal employee unions and to adopt the Department of Labor’s forthcoming principles on mitigating AI’s potential harms to employees. The Department of Labor is also leading by example, consulting its own employees and unions both in developing these principles and in its own governance and use of AI.
The guidance also advises federal agencies on managing risks specific to AI procurement. Federal procurement of AI presents unique challenges, and a strong AI market requires safeguards for fair competition, data protection, and transparency. Later this year, OMB will take action to ensure that agencies’ AI contracts align with OMB policy and protect the rights and safety of the public from AI-related risks. The Request for Information (RFI) issued today will gather public input on how to ensure that private-sector companies supporting the federal government follow available best practices and requirements.
Expanding transparency in AI usage
The policy announced today calls for greater public transparency in the government’s use of AI by requiring federal agencies to:
- Release an expanded annual inventory of their AI use cases, including identifying use cases that impact rights or safety and how the agency is addressing the relevant risks.
- Report metrics on agency AI use cases that are withheld from the public inventory because of their sensitivity.
- Notify the public of any AI exempted from compliance with any element of the OMB policy, along with the justifications for the exemption.
- Release government-owned AI code, models, and data, where such releases do not pose a risk to the public or to government operations.
Today, OMB is also releasing detailed draft guidance to agencies on the content of this public reporting.
Driving responsible AI innovation
OMB’s policy will also remove unnecessary barriers to responsible AI innovation across federal agencies. AI technology presents significant opportunities to help agencies address society’s most pressing challenges. Examples include:
- Addressing the climate crisis and responding to natural disasters. The Federal Emergency Management Agency is using AI to quickly review and assess structural damage in the aftermath of hurricanes, and the National Oceanic and Atmospheric Administration is developing AI to more accurately forecast extreme weather, flooding, and wildfires.
- Advancing public health. The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect the illicit use of opioids, and the Centers for Medicare and Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.
- Protecting public safety. The Federal Aviation Administration is using AI to help deconflict air traffic in major metropolitan areas to improve travel times, and the Federal Railroad Administration is researching AI to help predict unsafe railroad track conditions.
Advances in generative AI are expanding these opportunities, and OMB’s guidance encourages agencies to experiment with generative AI responsibly, with appropriate safeguards in place. Many agencies have already begun this journey, including the use of AI chatbots and other AI pilots to improve the customer experience.
Growing the AI workforce
Building and deploying AI responsibly to serve the public starts with people. OMB’s guidance directs agencies to expand and upskill their AI talent. To strengthen AI risk management, innovation, and governance, the administration is taking the following workforce actions:
- The Biden-Harris Administration has committed to hiring 100 AI professionals by Summer 2024 to advance the trustworthy and safe use of AI as part of the National AI Talent Surge created by Executive Order 14110, and will run a career fair for AI roles across the federal government on April 18.
- To further these efforts, the Office of Personnel Management has issued guidance on pay and leave flexibilities for AI roles, to improve retention and underscore the importance of AI talent across the federal government.
- The President’s budget for fiscal year 2025 includes an additional $5 million to expand the General Services Administration’s government-wide AI training program, which last year reached more than 7,500 participants from 85 federal agencies.
Strengthening AI governance
To ensure accountability, leadership, and oversight of the use of AI in the federal government, OMB policy requires federal agencies to:
- Designate Chief AI Officers to coordinate the use of AI across their agencies. Since December, OMB and the Office of Science and Technology Policy have regularly convened these officials in a new Chief AI Officer Council to coordinate efforts across the federal government and prepare for implementation of OMB’s guidance.
- Establish AI Governance Boards, chaired by the Deputy Secretary or equivalent, to coordinate and govern the use of AI across the agency. As of today, the Department of Defense, the Department of Veterans Affairs, and the Department of Housing and Urban Development have established these governance bodies, and every CFO Act agency is required to do so by May 27, 2024.
In addition to this guidance, the government has announced several other measures to promote the responsible use of AI in government.
- OMB is issuing a Request for Information (RFI) on the responsible procurement of AI in government, to inform future OMB action governing the use of AI under federal contracts.
- Agencies will expand reporting for the 2024 Federal AI Use Case Inventory, broadly increasing public transparency into how the federal government uses AI.
- The administration has committed to hiring 100 AI professionals by Summer 2024, as part of the National AI Talent Surge, to advance the trustworthy and safe use of AI.
With these actions, the government is leading by example as a global model for the safe, secure, and trustworthy use of AI. The policy announced today builds on the administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, and will advance federal accountability and oversight of AI, increase transparency for the public, and create a clear baseline for agencies to innovate for the public good while managing risks.
Today also marks a significant milestone of 150 days since the issuance of Executive Order 14110, and the table below provides an updated summary of the many actions that federal agencies have completed in response to the Executive Order.

###


