The lesson from New York City’s requirement that hiring algorithms undergo bias audits is that policymakers must regulate AI vendors.
New York City’s landmark law, known as Local Law 144, requires employers to conduct annual bias audits of the automated hiring tools they use. I testified in favor of an earlier version of the bill that placed the burden on the vendors of employment tools. In my view, exposing the disparate impacts of automated hiring tools to public view would be valuable information for employers, who are the potential customers of those tools.
However, the law was changed to place the burden on the employer rather than the vendor. I do not know why for certain, but it was probably because it was unclear whether the city had jurisdiction over vendors, some of whom operate in many states.
In any case, it was a mistake to place the burden solely on employers. The law took effect six months ago, and research by the public interest group Data & Society found that while many companies contracted for bias audits, most of the results were never disclosed to the public.
The apparent loophole is that the public disclosure requirement applies only if the tool is actually used to make employment decisions. Those familiar with the adverse action requirements for the use of credit reports in employment decisions under the Fair Credit Reporting Act will recognize the problem: only the employer can know whether the decision-making tool was actually used to make a decision.
Employers in New York City routinely make internal determinations that they are exempt from the audit and disclosure requirements, then wait for regulators and aggrieved applicants and employees to bring enforcement actions. New York City enforces the law on a complaint basis and has not yet taken any enforcement action. As a result, the algorithmic bias measure, heralded as a model for other states and local governments, will most likely sit ignored on the books, a mere facade.
There are many possible ways forward, some of which were discussed in a December commentary in The Hill by Jacob Metcalf, an author of the Data & Society study mentioned above. But two things stand out to me. The first is to hold vendors responsible for conducting bias audits and publicizing the disparate impacts of their employment tools. The second is to do this on a national basis, which would prevent vendors from circumventing local laws by declining to sell in those jurisdictions and would eliminate questions about the legal authority to regulate interstate commerce.
A big benefit of requiring vendor audits is that the information is put into the hands of employers, who can choose the hiring tools that best meet their needs while weighing the legal risks of violating employment discrimination rules.
If a tool typically recommends 10 white candidates per 100 applicants but only two Black candidates per 100 applicants, that is useful information for employers. An employer that discovers another tool recommends eight Black candidates per 100 applicants will know that the second tool puts the company in a better position to comply with the Equal Employment Opportunity Commission’s 80 percent rule of thumb.
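For readers who want the arithmetic behind that comparison, here is a minimal sketch of how the 80 percent (“four-fifths”) rule of thumb is commonly calculated, using the hypothetical numbers above; the function and figures are illustrative, not anything prescribed by Local Law 144 or the EEOC.

```python
# Illustrative arithmetic for the EEOC's "four-fifths" (80 percent) rule of
# thumb, using the hypothetical numbers from the paragraph above.

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Selection rate of a group divided by the highest group's selection rate."""
    return group_rate / highest_rate

white_rate = 10 / 100        # 10 white candidates recommended per 100 applicants
tool_a_black_rate = 2 / 100  # first tool: 2 Black candidates per 100 applicants
tool_b_black_rate = 8 / 100  # second tool: 8 Black candidates per 100 applicants

for tool, rate in [("Tool A", tool_a_black_rate), ("Tool B", tool_b_black_rate)]:
    ratio = impact_ratio(rate, white_rate)
    verdict = "meets" if ratio >= 0.8 else "falls short of"
    print(f"{tool}: impact ratio {ratio:.0%}, {verdict} the 80 percent guideline")

# Output:
# Tool A: impact ratio 20%, falls short of the 80 percent guideline
# Tool B: impact ratio 80%, meets the 80 percent guideline
```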
Requirements to conduct and disclose bias audits do not, and should not, set a standard for unlawful disparate impact. That is the proper role of discrimination law. If a hiring company believes it has a legitimate business reason to use a tool that selects just two out of every 100 Black applicants, it is free to use that tool. But at least the company will be aware of the legal risks involved in doing so.
Rather than placing the burden on every employer to audit and publicly disclose its use of automated employment tools, a vendor disclosure requirement puts pressure on vendors to build employment tools that avoid disparate impacts as much as possible. Vendors will naturally do everything they lawfully can to avoid the embarrassment of a poor audit. It is far more effective to use these market incentives to push vendors to create fair employment tools.
Congress can draw lessons from this example for AI law and regulation. The agencies currently responsible for ensuring that AI users comply with the law, such as the EEOC for employment and the financial regulators for credit scores, have limited authority to impose disclosure and testing requirements on AI vendors.

Last year, my former Brookings colleague Alex Engler, now in the White House, called on Congress to pass comprehensive legislation strengthening the powers of existing agencies to address AI issues within their jurisdictions. One element of that proposal is giving these agencies the power to impose audit and disclosure rules on AI vendors.
As Congress looks for opportunities to regulate the rapidly changing field of artificial intelligence, it should seize this low-hanging fruit and move forward with legislation requiring bias audits and disclosure by AI vendors.
Mark MacCarthy is the author of “Regulating Digital Industries” (Brookings, 2023), an adjunct professor in the Communication, Culture and Technology Program at Georgetown University, a nonresident senior fellow at the Institute for Technology Law and Policy at Georgetown Law, and a nonresident senior fellow at the Brookings Institution.