As law enforcement agencies, from the CIA to the IRS to local police departments, begin to use AI technology in a variety of ways, it is imperative that Congress devise a framework that protects the privacy and civil rights of Americans everywhere.
Some forward-thinking lawmakers have already begun this positive process, introducing bills such as the AI Assessment Act by Sen. Michael Bennet (D-Colo.) and the TAG Act by Sens. Gary Peters (D-Mich.), James Lankford (R-Okla.) and Mike Braun (R-Ind.). Unfortunately, other members of Congress have introduced knee-jerk bills that would over-regulate AI in the private sector. Congress should instead focus on limiting the government’s ability to use these tools to violate constitutional rights.
Red light camera case law provides valuable insight into how courts have addressed due process issues in the context of automated enforcement systems. The use of red light cameras has been, and remains, controversial. They are especially prevalent on the Washington, D.C., beltway, where one wrong turn can be costly to your driving record and your wallet. Proponents argue that their deterrent effect improves road safety and that enforcement is civil rather than criminal, while opponents say the cameras infringe civil liberties and violate due process rights.
It is a cornerstone of our constitutional rights that individuals be given notice and an opportunity to be heard before being deprived of life, liberty, or property. Red light camera case law addresses this due process issue, with courts considering whether using camera photos as evidence violates an individual’s Sixth Amendment right to confront an accuser.
This traditionally involves cross-examining witnesses in court. Critics argue that using automatic camera photos as evidence without allowing cross-examination of the individuals who control the records and systems associated with the cameras violates this constitutional right.
Similarly, the federal government must carefully consider how AI-generated evidence is collected, stored, and presented to ensure that individuals have a meaningful opportunity to challenge it.
Another fundamental principle of criminal law is the presumption of innocence, which the Supreme Court has called “axiomatic and elementary” law. It requires that individuals be presumed innocent until proven guilty beyond a reasonable doubt. Red light camera case law intersects with this principle by raising the question of who bears the burden of proof when AI-generated complaints are used in law enforcement proceedings.
Is the existence of a photo an accusation that the defendant must refute, or is the defendant presumed innocent unless someone can testify to what they observed? This parallels the question of how complaints generated entirely by AI, or flagged by initial AI analysis, should be treated within the framework of the presumption of innocence.
Using camera photos as evidence without allowing individuals to cross-examine those responsible for maintaining the records and systems behind red light cameras would violate the right to confront one’s accuser. Individuals should be given the opportunity to challenge the accuracy and reliability of these automated systems through cross-examination. This becomes even more important when considering machine learning algorithms built on “black box” methodologies that prevent meaningful investigation of the underlying processes that lead to their conclusions.
For example, consider a hypothetical: The Federal Bureau of Investigation creates a black-box AI model that examines cell phone metadata, online browsing history, and other public information about protester groups. It then uses that model to identify similar individuals, applying it to an AI system that trawls internet traffic to flag other people as potential risks. Without the ability to examine how this model singles individuals out for increased surveillance, how could defendants mount an effective defense against resulting law enforcement charges?
Without Congressional intervention, this could easily be the future for Americans. As AI plays an increasingly prominent role in generating complaints to law enforcement, it is important to address how this intersects with individuals’ rights to confront their accusers. Balancing technological advances with fundamental constitutional rights requires careful oversight and thoughtful legislation.
These principles, derived from red light camera case law, should inform the analysis of the many federal agencies that have already adopted AI technologies in a variety of capacities.
The Internal Revenue Service uses AI algorithms to detect tax fraud and streamline tax return processing. The Department of Labor employs AI-powered tools for 18 different use cases, including claims analysis and document verification; this could eventually extend to flagging potential risks for OSHA or WHD complaints. The CIA is leveraging AI for data analysis and intelligence gathering. Again, it is easy to imagine how this could be integrated with PRISM-style programs to achieve unprecedented levels of monitoring.
It is clear that the federal government’s use of AI is only accelerating and expanding into a variety of use cases, with no guardrails in place.
Instead of introducing legislation that would stifle AI innovation, lawmakers should press federal agencies for greater oversight and transparency regarding their current and future uses of AI. That transparency would give policymakers insight into best practices and potential pitfalls as they craft restrictions on the federal government’s use of AI and develop other related legislation.
Congress and the White House should work together to develop a clear framework and guardrails for the development, deployment, and use of AI to inform enforcement efforts across the federal government. If left unchecked, this problem will only grow, given the incentives to increase enforcement levels without requiring commensurate staffing.
Nick Johns is a senior policy and government affairs manager at the National Taxpayers Union, a nonprofit organization dedicated to defending taxpayer interests at all levels of government.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.