Impact of AI on public safety
The policy defines several uses of AI that could impact public safety and human rights, and requires government agencies to put safeguards in place by December 1. Safeguards must include ways to reduce the risk of algorithmic discrimination and to give citizens transparency into how the government uses AI.
Government agencies must stop using AI that fails to meet these safeguards. The public must be notified of, and given justification for, any AI that is exempt from compliance with OMB policies.
AI that controls dams, power grids, traffic control systems, vehicles, and robotic systems in the workplace is AI that impacts safety. On the other hand, AI that blocks or removes protected speech, creates risk assessments of individuals for law enforcement, and performs biometric authentication would be classified as impacting rights. AI decisions regarding health care, housing, employment, medical diagnoses, and immigration status also fall into the rights-impacting category.
OMB policy also requires agencies to release government-owned AI code, models, and data when release does not pose a risk to the public or government operations.
The new policy received mixed reviews from human rights and digital rights groups. The American Civil Liberties Union argued that the policy is an important step toward protecting U.S. residents from AI abuse, but said it has major holes, including broad exceptions for national security systems and intelligence agencies, as well as exceptions for confidential law enforcement information.
“Although the federal government’s use of AI that undermines rights and safety should not be allowed, harmful and discriminatory uses of AI by national security agencies, state governments, and others remain largely unchecked,” ACLU senior policy advisor Cody Wehnke said in a statement. “Policymakers must work to close these gaps and create the protections we deserve.”
