A hiring law paves the way for AI regulation

European legislators are finalizing an AI law. The Biden administration and leaders in Congress have their own plans to rein in artificial intelligence. Sam Altman, the chief executive of OpenAI, creator of the AI sensation ChatGPT, recommended in Senate testimony last week the creation of a federal agency with oversight and licensing authority. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and commitments, New York City has emerged as a humble pioneer in AI regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using AI software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies face fines for violations.

New York City’s targeted approach represents an important front in AI regulation. The broad principles developed by governments and international organizations, experts say, must eventually be translated into details and definitions. Who is affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you’re not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of the Center for Responsible AI.

But even before it goes into effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while corporate groups say it’s impractical.

The complaints from both sides point to the challenge of regulating AI, which is progressing at a breakneck pace with unknown consequences, fueling enthusiasm and fear.

Uncomfortable compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that could weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you don’t learn how.”

The law applies to companies with employees in New York City, but labor experts expect it to affect practices on a national level. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate AI in hiring. And Illinois and Maryland have enacted laws restricting the use of specific AI technologies, often for workplace surveillance and applicant screening.

The New York City law was born out of a clash of sharply opposing viewpoints. The City Council approved it during the final days of Mayor Bill de Blasio’s administration. Hearings and public comments, amounting to more than 100,000 words, came later, overseen by the city’s Department of Consumer and Worker Protection, the agency responsible for writing the rules.

The result, some critics say, is overly sympathetic to business interests.

“What could have been a groundbreaking law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly, so that AI software requires an audit only if it is the sole or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

That leaves out the main way automated software is used, she said, with a hiring manager invariably making the final choice. The potential for AI-driven discrimination, she said, typically lies in screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the groups that are measured for unfair treatment. It covers bias based on gender, race and ethnicity, but not discrimination against older workers or people with disabilities.

“My biggest concern is that this will become the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

City officials said the law was narrowed to sharpen it and make sure it was focused and enforceable. The Council and the worker protection agency heard many voices, including public-interest activists and software companies. The goal was to weigh trade-offs between innovation and potential harm, officials said.

“This is a significant regulatory success in ensuring that AI technology is used ethically and responsibly,” said Robert Holden, who led the Council’s technology committee when the law was passed and remains a member of the committee.

New York City is trying to address new technology in the context of federal labor laws with hiring guidelines that date back to the 1970s. The Equal Employment Opportunity Commission’s key rule states that no practice or selection method used by employers should have a “disparate impact” on a legally protected group such as women or minorities.

Business groups have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the required independent audits of AI were “not feasible” because “the audit landscape is emerging,” lacking standards and professional oversight bodies.

But a burgeoning field can be a market opportunity. The AI audit business, experts say, will only grow. It is already attracting law firms, consultants and start-ups.

Companies that sell AI software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, offering proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

“We believe we can comply with the law and show what good AI looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to help hiring managers.

New York City’s law also takes an approach to regulating AI that may become the norm. The law’s key measurement is an “impact ratio,” a calculation of the software’s effect on a protected group of job applicants. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”
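
That measurement can be made concrete with a little arithmetic. Below is a minimal Python sketch, using hypothetical numbers, of how an impact ratio can be computed from group selection rates; the 0.8 threshold in the comments refers to the E.E.O.C.’s traditional four-fifths rule of thumb, not a figure set by the city’s law.

```python
# Hypothetical screening outcomes by applicant group; these numbers are
# illustrative only, not drawn from any real audit.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 300, "selected": 60},
}

# Selection rate: the fraction of each group's applicants the tool advanced.
rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: a group's selection rate divided by the highest group's rate.
    ratio = rate / highest
    flag = "possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here group_a is selected at a rate of 0.30 (impact ratio 1.00), while group_b is selected at 0.20, for an impact ratio of about 0.67, below the four-fifths benchmark and therefore the kind of result that would draw scrutiny.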

Critics say that in life-defining applications like hiring, people have a right to an explanation of how a decision was made. But AI systems like ChatGPT-style software are becoming ever more complex, which may put the goal of explainable AI out of reach, some experts say.

“The focus becomes the output of the algorithm, not the operation of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which develops certifications for the safe use of AI applications in the workplace, health care and finance.
