The White House announced a groundbreaking policy requiring federal agencies to identify and mitigate the potential risks of artificial intelligence (AI), underscoring the government’s commitment to the responsible deployment of AI technologies.
Under the new rules, each federal agency must appoint a chief AI officer within 60 days. This person will be responsible for coordinating the agency’s use of AI and ensuring compliance with the policy.
Government agencies will also need to create detailed, publicly accessible inventories of their AI systems. These inventories must highlight use cases that could impact safety and civil rights, such as AI-powered healthcare and law enforcement decision-making.
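The policy does not prescribe a data format for these inventories. Purely as an illustration, here is a minimal sketch of what one machine-readable inventory entry could look like; the record structure and every field name below are invented for this example:

```python
# Hypothetical sketch of a single AI use-case inventory entry.
# Field names are illustrative only; the policy does not prescribe a schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCaseEntry:
    agency: str
    system_name: str
    purpose: str
    impacts_safety: bool        # flags safety-impacting use cases
    impacts_civil_rights: bool  # flags rights-impacting use cases
    opt_out_available: bool     # e.g., an alternative ID check at airports

entry = AIUseCaseEntry(
    agency="TSA",
    system_name="Checkpoint facial recognition",
    purpose="Identity verification at airport security",
    impacts_safety=True,
    impacts_civil_rights=True,
    opt_out_available=True,
)

print(json.dumps(asdict(entry), indent=2))  # publicly accessible record
```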
The policy builds on President Joe Biden’s October Executive Order on AI, which outlined a wide range of measures to promote safe and responsible AI development across sectors.
“When government agencies use AI tools, we will require them to verify that those tools do not endanger the rights and safety of Americans,” Vice President Kamala Harris said on a call announcing the new measures.
Government agencies have until December to implement safeguards for AI applications that could impact the rights or safety of Americans. This includes providing clear opt-out options for technologies such as facial recognition and ensuring transparency around how AI systems reach their conclusions. Agencies that fail to implement these safeguards will need to cease using the relevant AI systems or obtain special justification from senior leadership.
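That cease-or-justify requirement has a simple conditional shape. The following is a minimal sketch of the decision logic, with invented function and parameter names, purely to illustrate how the safeguard rule operates:

```python
# Illustrative sketch of the policy's cease-use rule; names are invented.
def may_continue_operating(impacts_rights_or_safety: bool,
                           safeguards_implemented: bool,
                           senior_leadership_waiver: bool) -> bool:
    """Systems that impact rights or safety need safeguards or a waiver."""
    if not impacts_rights_or_safety:
        return True
    return safeguards_implemented or senior_leadership_waiver

# A rights-impacting system with no safeguards and no waiver must stop.
print(may_continue_operating(True, False, False))  # False -> cease use
print(may_continue_operating(True, True, False))   # True  -> may continue
```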
Biometrics under the microscope
One focus of the new policy is reducing algorithmic discrimination: flaws in computer systems that produce unequal treatment based on legally protected characteristics such as race and gender. The Office of Management and Budget (OMB) obligates federal agencies to proactively assess, test and monitor the potential harms posed by AI systems to ensure they do not perpetuate bias against certain demographics.
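The policy leaves testing methodology to agencies, but one common fairness-auditing technique is to compare error rates across demographic groups and flag large gaps. Here is a minimal sketch of that idea, using fabricated data and an illustrative disparity threshold:

```python
# Minimal sketch of a demographic error-rate comparison, assuming each
# record is (group_label, prediction, ground_truth). Data is fabricated
# for illustration; a real audit would use an agency's own test sets.
from collections import defaultdict

results = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [false matches, negatives seen]
for group, predicted, actual in results:
    if actual == 0:              # only true non-matches can be false matches
        errors[group][1] += 1
        if predicted == 1:
            errors[group][0] += 1

rates = {g: fp / n for g, (fp, n) in errors.items() if n}
worst, best = max(rates.values()), min(rates.values())
print(rates)
if best > 0 and worst / best > 1.25:  # illustrative disparity threshold
    print("Disparity exceeds threshold: review before deployment")
```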
An example of how the new policy protects individuals can be seen in its impact on travelers. The Transportation Security Administration (TSA) uses facial recognition technology that has been documented to have lower accuracy rates for people with darker skin. The new AI policy directly addresses this concern by giving travelers the right to opt out of facial recognition scans. This opt-out allows individuals to choose an alternative identity verification process that does not rely on potentially biased technology.
“While TSA’s use of facial recognition certainly speeds up the identity verification process and brings an additional layer of security to travel, it also raises significant privacy and security concerns,” Centific, a global provider of AI and data services, told PYMNTS.
“The important thing here is that facial recognition systems are used in a way that is accurate (no false positives), transparent, and accountable to travelers. Ensuring public engagement is also essential to building trust and confidence in the use of facial recognition and other AI technologies.”
The Department of Homeland Security (DHS), the TSA’s parent agency, has been using facial recognition for some time, Kurt Rohloff, co-founder and chief technology officer of Duality Technologies, a startup focused on privacy-preserving analytics and collaboration on sensitive data, told PYMNTS. For example, Customs and Border Protection (CBP) has installed facial recognition technology at airports to simplify the entry process for Americans returning from international travel.
“DHS in general, and TSA in particular, promote the responsible use of privacy technologies that protect the rights of citizens while maintaining security, and DHS is at the forefront of implementing and using privacy technologies,” he added.
Mohamed Lazzouni, CTO of biometrics company Aware, emphasized that the new regulation underscores the need for organizations to thoroughly educate users about biometrics and to provide transparent options to consent or decline.
“In most cases, the desire for convenience will prevail and most people will choose the biometric method,” he added. “A great example: airports around the world are using biometrics to admit passengers to their planes in a fraction of the time it would take using standard IDs.”
Chief AI officers put to the test
The federal government’s new AI policy is ambitious, Jennifer Gill, vice president of product marketing at Skyhawk Security, a cybersecurity firm specializing in AI for cloud security, told PYMNTS, adding that the new rules must be implemented correctly to be effective.
“Addressing bias is extremely important, especially in the example of veterans’ healthcare,” Gill said. “Government agencies need to continuously monitor models to ensure that healthcare goals are met. This application requires models to be evaluated and tested daily, which may be too big a burden for agencies, but it absolutely should be done. The cost of using AI and the cost of maintaining AI must be carefully scrutinized.”
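Gill’s point about daily evaluation can be sketched concretely. The check below assumes hypothetical evaluate_model() and send_alert() hooks and made-up thresholds; it only illustrates the kind of recurring test an agency might automate:

```python
# Minimal sketch of a recurring model check, assuming hypothetical
# evaluate_model() and send_alert() hooks; all thresholds are illustrative.
import random

BASELINE_ACCURACY = 0.92   # accuracy measured at deployment (made up)
MAX_DEGRADATION = 0.05     # tolerated drop before escalation (made up)

def evaluate_model() -> float:
    """Stand-in for a real evaluation on a held-out healthcare test set."""
    return BASELINE_ACCURACY - random.uniform(0.0, 0.08)

def send_alert(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for paging or reporting

def daily_check() -> None:
    accuracy = evaluate_model()
    if BASELINE_ACCURACY - accuracy > MAX_DEGRADATION:
        send_alert(f"Accuracy fell to {accuracy:.3f}; pause or justify use")
    else:
        print(f"OK: accuracy {accuracy:.3f} within tolerance")

if __name__ == "__main__":
    daily_check()  # in production this would run on a daily scheduler
```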
One point of contention could be the provision for appointing an agency chief AI officer. Gill pointed out potential problems with this policy and emphasized the need for uniform standards across agencies.
“When each chief AI officer manages and monitors the use of AI at the discretion of each agency, it creates inconsistencies, gaps, and vulnerabilities,” Gill added. “These vulnerabilities in AI can be exploited for a variety of illicit uses. Inconsistent management and oversight of AI use puts the entire federal government at risk.”
Enforcement of rules
While AI regulations look comprehensive on paper, they can be difficult to implement and enforce, Lisa Donnan, a partner at cybersecurity firm Option3, told PYMNTS. She emphasized the need for effective compliance monitoring and penalties for violations to prevent abuse.
“However, too strict regulations can stifle innovation, so a balance needs to be struck to promote security without impeding technological progress,” she added.
Relying solely on internal assessment and monitoring can leave AI governance vulnerable, Gal Ringel, co-founder and CEO of Mine, a global data privacy management company, told PYMNTS. “While we understand the security concerns, an independent third party would be better suited to perform AI-related assessments, and a specific government agency may need to be established to do so.”
Ringel also pointed to Utah’s recent move to establish its own AI law, a departure from the federal efforts. He said the move sets a precedent that could lead other states to enact their own AI regulations, much as happened with data privacy laws.
“We need federal legislation to oversee the private sector, and while it does not need to take the same risk-based approach as the EU or U.K., meaningful legislation needs to be enacted that promotes the same principles of transparency, harm reduction and responsible use reflected in today’s announcement,” he added.