Hiring quotas are of questionable legal status, but for several years, some large companies have been adopting quota-like “targets” under the banner of Diversity, Equity and Inclusion (DEI) initiatives. Many of these so-called targets apply to the placement of minority candidates into “leadership positions”, and some targets may apply more broadly. Explicit quotas have long been viewed negatively by the public. Quotas have also been proscribed under most circumstances by the Supreme Court, and the EEOC’s Compliance Manual still includes rigid limits on when the setting of minority hiring “goals” is permissible.

Yet large employers seem to prefer the legal risks posed by aggressive DEI policies to the risk of lawsuits by minority interests, unrest among minority employees and “woke” activists, and “disparate impact” inquiries by the EEOC. Now, as Stewart Baker writes in a post over at the Volokh Conspiracy, employers have a new way of improving — or even eliminating — the tradeoff they face between these risks: “stealth quotas” delivered via artificial intelligence (AI) decisioning tools.

Skynet Smiles

A few years ago I discussed the extensive use of algorithms to guide a range of decisions in “Behold Our Algorithmic Overlords”. There, I wrote:

Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these ‘use-cases’ are already happening to one extent or another.

That post dealt primarily with the use of algorithms by large tech companies to suppress information and censor certain viewpoints, a danger still of great concern. However, the use of AI to impose de facto quotas in hiring is a phenomenon that will unequivocally reduce the efficiency of the labor market. But exactly how does this mechanism work to the satisfaction of employers?

Machine Learning

As Baker explains, AI algorithms are “trained” to find optimal solutions to problems via machine learning techniques, such as neural networks, applied to large data sets. These techniques are not as straightforward as more traditional modeling approaches such as linear regression, which more readily lend themselves to intuitive interpretation of model results. Baker uses the example of lung x-rays showing varying degrees of abnormalities, which range from the appearance of obvious masses in the lungs to apparently clear lungs. Machine learning algorithms sometimes accurately predict the development of lung cancer in individuals based on clues that are completely non-obvious to expert evaluators. This, I believe, is a great application of the technology. It’s too bad that the intuition behind many such algorithmic decisions is often impossible to discern. And the application of AI decisioning to social problems is troubling, not least because it necessarily reduces the richness of individual qualities to a set of data points, and in many cases, defines individuals based on group membership.

When it comes to hiring decisions, an AI algorithm can be trained to select the “best” candidate for a position based on all encodable information available to the employer, but the selection might not align with a hiring manager’s expectations, and it might be impossible to explain the reasons for the choice to the manager. Still, giving the AI algorithm the benefit of the doubt, it would tend to make optimal candidate selections across reasonably large sets of similar, open positions.
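To make the selection mechanism concrete, here is a toy sketch, with entirely invented candidates, features, and outcomes: a simple logistic model is fit to past hiring results and then used to rank new applicants. Real hiring tools use far richer encoded data and far less interpretable models, which is exactly why their rationale can be hard to explain to a hiring manager.

```python
# Toy sketch of algorithmic candidate selection (all data invented).
# A plain gradient-descent logistic regression, no external libraries.
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted success probability
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """Predicted probability of success for a candidate's feature vector."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Past hires: [experience (scaled), skills test (scaled)] -> succeeded (1/0)?
X = [[0.2, 0.3], [0.9, 0.8], [0.4, 0.9], [0.8, 0.2], [0.1, 0.1]]
y = [0, 1, 1, 0, 0]
w, b = fit_logistic(X, y)

# Rank open applicants and pick the highest predicted success.
applicants = {"a1": [0.3, 0.4], "a2": [0.7, 0.9], "a3": [0.9, 0.1]}
best = max(applicants, key=lambda name: score(w, b, applicants[name]))
print(best)
```

Even in this transparent two-feature toy, the fitted weights, not a manager’s judgment, decide who ranks first; with thousands of opaque features, the “why” behind the pick becomes effectively unrecoverable.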

Algorithmic Bias

A major issue with respect to these algorithms has been called “algorithmic bias”. Here, I limit the discussion to hiring decisions. Ironically, “bias” in this context is a rather slanted description, but what’s meant is that the algorithms tend to select fewer candidates from “protected classes” than their proportionate shares of the general population. This is more along the lines of so-called “disparate impact”, as opposed to “bias” in the statistical sense. Baker discusses the attacks this has provoked against algorithmic decision techniques. In fact, a privacy bill is pending before Congress containing provisions to address “AI bias” called the American Data Privacy and Protection Act (ADPPA). Baker is highly skeptical of claims regarding AI bias both because he believes they have little substance and because “bias” probably means that AIs sometimes make decisions that don’t please DEI activists. Baker elaborates on these developments:

“The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House energy and commerce committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.”

What the hell are the Republicans thinking? Whether or not it becomes a matter of law, misplaced concern about AI bias can be addressed in a practical sense by introducing the “right” constraints to the algorithm, such as a set of aggregate targets for hiring across pools of minority and non-minority job candidates. Then, the algorithm still optimizes, but the constraints impinge on the selections. The results are still “optimal”, but in a more restricted sense.
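A toy sketch of how such a constraint changes the outcome (all candidate names, groups, and scores are invented): the selection still maximizes total predicted “fit”, but subject to an aggregate target for one group, and the constrained total can never exceed the unconstrained one. This is the restricted sense in which the result remains “optimal”.

```python
# Toy sketch of constrained selection (hypothetical candidates and scores).
def select(candidates, k, min_group_a=0):
    """Pick the k highest-scoring candidates, subject to hiring at least
    `min_group_a` from group "A". For an at-least constraint this is exact:
    take the top min_group_a from A, then fill remaining slots by score."""
    a = sorted((c for c in candidates if c["group"] == "A"),
               key=lambda c: c["score"], reverse=True)
    b = sorted((c for c in candidates if c["group"] == "B"),
               key=lambda c: c["score"], reverse=True)
    chosen = a[:min_group_a]                 # satisfy the quota constraint first
    rest = sorted(a[min_group_a:] + b,
                  key=lambda c: c["score"], reverse=True)
    chosen += rest[:k - len(chosen)]         # fill remaining slots by score
    return chosen

candidates = [
    {"name": "c1", "group": "B", "score": 0.92},
    {"name": "c2", "group": "B", "score": 0.88},
    {"name": "c3", "group": "A", "score": 0.75},
    {"name": "c4", "group": "B", "score": 0.86},
    {"name": "c5", "group": "A", "score": 0.60},
]

unconstrained = select(candidates, k=3)
constrained = select(candidates, k=3, min_group_a=1)
print(round(sum(c["score"] for c in unconstrained), 2))  # 2.66 (c1, c2, c4)
print(round(sum(c["score"] for c in constrained), 2))    # 2.55 (c3 in, c4 out)
```

The gap between the two totals (0.11 here) is the productivity cost the employer silently accepts, and nothing in the algorithm’s output flags it.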

Stealth Quotas

As Baker says, these constraints on algorithmic tools would constitute a way of imposing quotas on hiring that employers won’t really have to explain to anyone. That’s because: 1) the decisioning rationale is so opaque that it can’t readily be explained; and 2) the decisions are perceived as “fair” in the aggregate due to the absence of disparate impacts. As to #1, however, the vendors who create hiring algorithms, and specific details regarding algorithm development, might well be subject to regulatory scrutiny. In the end, the chief concern of these regulators is the absence of disparate impacts, which is cinched by #2.

About a month ago I posted about the EEOC’s outrageous and illegal enforcement of disparate impact liability. Should I welcome AI interventions because they’ll probably limit the number of enforcement actions against employers by the EEOC? After all, there is great benefit in avoiding as much of the rigamarole of regulatory challenges as possible. Nonetheless, as a constraint on hiring, quotas necessarily reduce productivity. By adopting quotas, either explicitly or via AI, the employer foregoes the opportunity to select the best candidate from the full population for a certain share of open positions, and instead limits the pool to narrow demographics.

Demographics are dynamic, and therefore stealth quotas must be dynamic to continue to meet the demands of zero disparate impact. But what happens as an increasing share of the population is of mixed race? Do all mixed race individuals receive protected status indefinitely, gaining preferences via algorithm? Does one’s protected status depend solely upon self-identification of racial, ethnic, or gender identity?

For that matter, do Asians receive hiring preferences? Sometimes they are excluded from so-called protected status because, as a minority, they have been “too successful”. Then, for example, there are issues such as the classification of Hispanics of European origin, who are likely to help fill quotas that are really intended for Hispanics of non-European descent.

Because self-identity has become so critical, quotas present massive opportunities for fraud. Furthermore, quotas often put minority candidates into positions at which they are less likely to be successful, with damaging long-term consequences to both the employer and the minority candidate. And of course there should remain deep concern about the way quotas violate the constitutional guarantee of equal protection to many job applicants.

The acceptance of AI hiring algorithms in the business community is likely to depend on the nature of the positions to be filled, especially when they require highly technical skills and/or the pool of candidates is limited. Of course, there can be tensions between hiring managers and human resources staff over issues like screening job candidates, but HR organizations are typically charged with spearheading DEI initiatives. They will be only too eager to adopt algorithmic selection and stealth quotas for many positions and will probably succeed, whether hiring departments like it or not.

The Death of Merit

Unfortunately, quotas are socially counter-productive, and they are not a good way around the dilemma posed by the EEOC’s aggressive enforcement of disparate impact liability. The latter can be solved only when Congress acts to more precisely define the bounds of illegal discrimination in hiring. Meanwhile, stealth quotas cede control over important business decisions to external vendors selling algorithms that are often unfathomable. Quotas discard judgments as to relevant skills in favor of awarding jobs based on essentially superficial characteristics. This creates an unnecessary burden on producers, even if it goes unrecognized by those very firms and is self-inflicted. Even worse, once these algorithms and stealth quotas are in place, they are likely to become heavily regulated and manipulated in order to achieve political goals.

Baker sums up a most fundamental objection to quotas thusly:

“Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.”

Quotas, and stealth quotas, substitute overt discrimination against individuals in non-protected classes, and sometimes against individuals in protected classes as well, for the imagined sin of a disparate impact that might occur when the best candidate is hired for a job. AI algorithms with protection against “algorithmic bias” don’t satisfy this objection. In fact, the lack of accountability inherent in this kind of hiring solution makes it far worse than the status quo.