Affirmative action for androids?
Androids may face discrimination from humans in the near to distant future. Do they deserve affirmative action?
The prospect of androids of equal or greater intelligence than humans isn’t hard to fathom. It may be that such a world never comes to be. However, in light of the prominent role science fiction plays in feeding our imaginations, it is hard not to consider the very real possibility that society will someday be composed of humans and androids living side by side. And it isn’t hard to see how such a world could come about: androids may originally be invented by humans to act as servants and sources of entertainment. Fictional accounts like Westworld paint a bleak picture of where technological progress may lead: humans eventually displaced by the androids they created. This sort of thing has happened before: Homo habilis, an evolutionary predecessor of modern Homo sapiens, unwittingly seeded its own demise by way of sexual selection and technological innovation, and was eventually replaced by Homo sapiens. There is the factory worker who builds a machine that replaces him on the factory floor. The professor who creates educational tools that make her obsolete to the university.
Similarly, it is reasonable to think that androids, if technologically possible, would be able to cognitively outperform humans in many ways, if not every way. Imagine a futuristic world where humans and androids live side by side, but where humans still largely control society: they control the levers of power and outnumber the androids. In such a world of unchecked android advantage, humans would have strong incentives to keep the android revolution at bay.
It is easy to dismiss the possibility that androids would deserve any greater moral consideration than a cellphone; but if the androids in question are self-aware, capable of suffering, and have projects and values that matter to them, then in such a world androids would be slaves at worst and second-class citizens at best. Such an arrangement likely wouldn’t survive: some humans would sympathize with the androids, and the androids themselves would resist.
We should begin examining the question of affirmative action for androids by first answering a couple of prior questions: whether androids would count morally, and what conditions, if any, would justify affirmative action for them.
Do Androids Count Morally?
Androids wouldn’t warrant affirmative action if they lacked moral standing. We don’t think, for instance, that desktop computers and cellphones deserve affirmative action; they just aren’t the kinds of things that have moral standing and deserve fair treatment. What would treating a cellphone ‘fairly’ even look like? For something to qualify as a subject of fairness considerations, it must have moral standing. As the philosopher Christopher Morris explains:
[The] metaphor of the moral community is an interesting one. It makes possession of moral standing analogous to the political status of citizenship. Like membership in the political community, membership in “the moral community” gives one a particular status that non-members lack, in particular a set of rights. … [Something] has moral standing if it is owed duties. This understanding of moral standing connects it with the notion of legal standing; both are conceptions of a status that entitles the holder to something[1].
To put the point simply: to have moral standing is to deserve moral consideration for one’s own sake. So for androids to be appropriate recipients of affirmative action, they must have moral standing. Androids of the kind we have in mind would likely count as persons (put aside whether such androids are technologically possible; we’re only evaluating the ethical implications if they are). And if androids are persons—self-aware, rational, capable of suffering, with a point of view of their own—they would possess moral standing and deserve moral treatment, including fair treatment, on par with humans.
This suggestion faces an obstacle: many people suspect androids aren’t the kind of thing that could, even in principle, have moral standing or warrant moral concern in the way humans do. An intuition at work in the background is that androids are synthetic, and thus couldn’t be persons. However, it isn’t clear why their constitution would bear on the question of their moral status; even if androids are made of silicon, say, and humans are made of meat, these differences appear incidental to their moral status. Put differently: if androids and humans are relevantly similar ‘on the inside,’ then the question of their material constitution is immaterial. And there clearly are possible androids similar enough to humans ‘on the inside’ (psychologically) that they warrant similar moral treatment.
We won’t spend too much time trying to establish that there are at least possible androids that would deserve to be treated as persons, but a thought experiment may illustrate why we should take the idea seriously. Suppose you have a rare cellular disease that is slowly killing you. Upon consulting with a doctor, it turns out there is a radical new treatment available: your body is replaced bit by bit with synthetic parts and silicon chips. The change happens so gradually, over the course of months and years, that from your point of view the changes are invisible: psychologically you remain, with the same memories, values, personality, and so forth. At the end of the transformation, even though you are constituted like an android, it seems from your perspective that you’ve survived. Presumably the story we’ve told isn’t impossible; and what should be clear from it is that you preserved your moral standing throughout the procedure. If so, your moral standing cannot be the product of the kind of stuff you’re made of. The same holds for androids: if they have a psychology similar to ours—self-awareness, the ability to suffer, rationality, stuff that matters to them, and so on—they warrant similar moral consideration.
We are thus left with the distinct possibility that androids could warrant moral consideration if they have the right stuff ‘on the inside.’ Androids in a situation like the one described at the start of the chapter would deserve to be treated as morally on par with their human counterparts. And this opens the possibility that androids would require affirmative action.
Would Androids Need Affirmative Action?
Even granting that androids and humans have similar moral standing, there is still the issue of what would justify special treatment; if anything, their similar moral standing would appear to cut against special treatment through policies such as affirmative action. There are two reasons to think that in a world like the one described above there would be a compelling moral case for android affirmative action: first, androids would be made worse off by the actions of humans; second, humans are not psychologically ‘wired’ to treat androids fairly, even though androids and humans have similar moral standing. But we should begin by distinguishing between different varieties of affirmative action.
When people hear the phrase ‘affirmative action’ they often construe it in terms of identity over qualification: in the case of racial affirmative action, it is tempting to frame it as a policy favoring black applicants over white applicants, even if the former are less qualified than the latter. This is why, when the subject of affirmative action is broached, people express their opposition with roughly the idea that a company should hire ‘the best man for the job.’ But this is a woefully poor understanding of the policy. It is true that on a strong version of affirmative action, minority applicants are favored over white applicants in virtue of their identity, even if they are less qualified; but hardly anyone defends strong affirmative action[2]. And even if we think ‘the best man for the job’ (antiquated phrase aside) is a good heuristic, it doesn’t tell us how to handle cases where, say, black and white applicants are equally qualified for the job in question. On weak affirmative action, the minority status of an applicant favors them only to the extent that it breaks a tie between them and an equally qualified white applicant[3]. On this version we still end up with a qualified hire, but past and ongoing discrimination favor the minority candidate.
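To make the logical structure of the two policies concrete, here is a minimal sketch in Python. It is purely illustrative: the `score` and `disadvantaged` fields are hypothetical stand-ins for whatever a real hiring process would actually measure, not a proposal for how such judgments should be quantified.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float         # hypothetical measure of job qualification
    disadvantaged: bool  # member of a historically disadvantaged group?

def pick_weak(candidates: list[Candidate]) -> Candidate:
    """Weak affirmative action: qualifications decide the outcome;
    disadvantaged status only breaks ties among the most qualified."""
    best = max(c.score for c in candidates)
    top = [c for c in candidates if c.score == best]
    # Tie-break: prefer a disadvantaged candidate among equals.
    return next((c for c in top if c.disadvantaged), top[0])

# Example: a tie at the top is broken in the disadvantaged candidate's favor;
# a higher-scoring candidate is never passed over.
pool = [Candidate("A", 9.1, False), Candidate("B", 9.1, True)]
assert pick_weak(pool).name == "B"
```

What the sketch makes explicit is that, on the weak policy, disadvantaged status never outweighs qualification; it operates only within the set of equally qualified candidates. A strong policy, by contrast, would let that status override the qualification ordering itself.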
Many may still balk at this as running afoul of fairness: they think appeals to minority status, even as a tie-breaker between comparably qualified applicants, remain unfair; it would be better in such cases to, say, flip a coin. But this is only right to the extent that the applicants are on equal footing in every other respect. Those who favor affirmative action reject this assumption: they hold that centuries of discrimination toward minorities have put them at a disadvantage—whether through historical trends still felt in the present or through disadvantage embedded in the current system—and that affirmative action aims to correct this. Critics often dispute the factual premise by arguing that such unfairness is no longer a large enough factor to warrant favoring minorities in the workplace. This response is revealing: the implication is that if such discrimination were still a fact, there would be a good case for weak affirmative action. As the philosopher Alan Goldman writes:
The rule for hiring the most competent was justified as part of a right to equal opportunity to succeed through socially productive effort, and on grounds of increased welfare for all members of society. Since it is justified in relation to a right to equal opportunity, and since the application of the rule may simply compound injustices when opportunities are unequal elsewhere in the system, the creation of more equal opportunities takes precedence when in conflict with the rule for awarding positions. Thus short-run violations of the rule are justified to create a more just distribution of benefits by applying the rule itself in future years[4].
These points were made in the debate over affirmative action for black Americans, but they apply equally to androids. If the system were rigged against androids who have moral standing similar to their human counterparts, then we may need a policy change (even if a temporary one) to address the inequalities. The standard objection to affirmative action—that it unfairly ignores the merits of individual candidates—doesn’t have the same bite against weak affirmative action. On the weak version, candidates are equally qualified; the policy merely takes into account the different paths each candidate traveled to be in contention for the job. The objection would have greater bite if each candidate had to overcome similar obstacles along the way; but that often isn’t the case. The devil is in the details, but this at least shows weak affirmative action need not be unfair in favoring members of disadvantaged groups who likely face background disadvantages. Why think androids may require affirmative action? There are a couple of reasons.
First, we imagined a futuristic world where androids were originally created to serve humans but were eventually ‘freed’ from servitude upon the recognition that they have the same moral standing as their human counterparts. In such a world it is plausible androids would be discriminated against because humans would fear their superior cognitive abilities; better to keep them down than to risk switching places with them. This is a common theme in human history: we have the impulse to ‘keep our boot’ on the throats of those we fear are a danger to us. There are numerous mundane examples, from supervisors who fear being replaced by more capable employees to parents who envy their children for doing the things they wished they could have done but couldn’t. It wouldn’t be surprising, then, to find that future humans would be motivated to band together to keep androids from forcing humans into the secondary societal roles the androids themselves once occupied. And given that humans are remarkably good at banding together in the face of an external threat, it isn’t unthinkable that the human majority would cooperate better than the android minority, remaining in power over the androids despite their cognitive disadvantage.
Second, humans won’t likely trust androids[5]. There is something unnerving about being stared at by an android[6]. Research in psychology on human-android interactions reveals that humans are highly sensitive to subtle changes in gaze, which can trigger a sense of unease. There is likely an evolutionary explanation for this: humans, unlike other primates, have visible whites in their eyes that allow third parties to better track gaze, and gaze is often an unwitting guide to one’s goals and aims. That feature of human psychology is likely a large obstacle to robust relationships between humans and androids; it doesn’t rule them out entirely, of course, but it makes them significantly more challenging.
This point dovetails nicely with evidence for the ‘uncanny valley’ effect in human psychology: robots that don’t resemble humans at all aren’t unlikeable, but as they become more humanlike, they are viewed as less and less likeable; only when they come to resemble humans closely enough does their likeability rebound. As a pair of researchers discovered:
[As] faces become more human than mechanical, they began to be perceived as frankly unlikeable. Finally, as faces became nearly human, likeability sharply rebounded to a final positive end point […] although the most human-like robots may be more likeable […] they may occupy a precarious position at which small faults in their humanness might send the social interaction tumbling […] the Uncanny Valley is a real influence on humans’ perceptions of robots as social partners, robustly influencing not only humans’ conscious assessments of their own reactions, but also able to penetrate more deeply to modify their actual trust-related social behavior with robot counterparts[7].
The solution to human-android relations may be to make androids more humanlike; but there is still the issue of what to do while androids sit in the lower reaches of the uncanny valley. If androids with enough cognitive sophistication have moral standing comparable to humans, it would be a moral tragedy to mistreat and discriminate against them on the basis of how they look. And even putting aside the uncanny valley, humans face difficulty adjusting to social interactions with androids: just knowing an individual is an android will likely be enough to put us on edge. Even if we can adjust to regular interactions with androids, the adjustment will likely be slow, and we face the challenge of treating androids fairly in the meantime.
We aren’t blameworthy for our psychology any more than we are blameworthy for our height. Our distrust of androids is deeply rooted in evolutionary processes that will likely be difficult to overcome. And if we lack control over something, it is difficult to see how we could be culpable for it. As the German philosopher Immanuel Kant writes:
[If] the moral law commands that we ought to be better human beings now, it inescapably follows that we must be capable of being better human beings[8].
Kant is stating a principle philosophers have dubbed ‘ought implies can’: to say that we morally ought to do something implies that we are able to do it. It makes no sense to claim that humans have a moral obligation to solve global hunger by snapping our fingers. By the same token, we cannot simply claim that humans morally ought not to discriminate against androids, for it appears our psychology and evolutionary history leave us incapable of complying, at least directly. However, there is an indirect route, though not the only avenue, by which we can fulfill our moral obligations to androids with moral standing similar to our own: weak affirmative action for androids.
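In the notation of deontic logic, the principle Kant is gesturing at is often glossed along the following lines (a schematic rendering offered for illustration, not Kant’s own formulation):

```latex
% Deontic-logic gloss of 'ought implies can' (requires amssymb for \Diamond):
% if A is obligatory (O A), then A is possible for the agent.
\[ \mathsf{O}A \rightarrow \Diamond A \]
% Contrapositive: if A is impossible for the agent, A is not obligatory.
\[ \lnot \Diamond A \rightarrow \lnot \mathsf{O}A \]
```

It is the contrapositive reading that does the work in the argument above: if it is impossible for us to extinguish our distrust of androids directly, we cannot be obligated to do so directly, which is why an indirect route such as weak affirmative action matters.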
Even putting aside issues of justice in the debate over weak affirmative action for androids, employers would have a further reason to prefer android candidates over equally qualified human ones: the androids likely overcame a great number of obstacles just to be in contention for the job in the first place. If candidates A and B look equally qualified on paper, but candidate A had to overcome greater obstacles to gain those qualifications, then candidate A would likely be better at the job than candidate B; to overcome such obstacles, candidate A would have to be that much better a candidate. As Dan Moller explains, in similar cases, when faced
with a pick of accountants at a firm, sound epistemology overwhelmingly suggests barreling past attractive, polite workers and urgently seeking out the ugliest, shortest, most boorish one available[9].
There is empirical evidence that factors like attractiveness and height heavily influence people’s judgments about merit and ability[10], and these qualities are often, though not always, irrelevant to one’s job duties and should be discounted. Once we discount them, it becomes clear that job candidates lacking these qualities are likely better qualified than their more outwardly appealing competitors. The same applies to androids: not only do issues of fairness suggest weak affirmative action may be justified—when job candidates are equally qualified but some are more disadvantaged—there are also practical reasons to think the disadvantaged, but otherwise comparable, candidates are better qualified than their counterparts. And there is a further reason that may support weak android affirmative action: we have stronger moral obligations to androids than we otherwise would because we caused them to exist, just as one has stronger moral obligations to one’s own children than to other children for the simple reason that one brought them into existence. Androids would be no different: whatever our moral obligations to them are, they are strengthened by the fact that we created them.
[1] Christopher W. Morris (2011). The Idea of Moral Standing. In The Oxford Handbook of Animal Ethics, edited by Tom Beauchamp and R. G. Frey. Oxford University Press, p. 262.
[2] Louis P. Pojman (1998). The Case Against Affirmative Action. International Journal of Applied Philosophy 12 (1): 97-115.
[3] David Boonin (2011). Should Race Matter? Unusual Answers to the Usual Questions. Cambridge University Press, Ch. 4.
[4] Alan Goldman (1979). Justice and Reverse Discrimination. Princeton University Press, p. 164-65 (my emphasis).
[5] This doesn’t bode well for possible future societal prosperity: societies with high levels of interpersonal trust tend to be far more prosperous than societies with low levels of interpersonal trust. See David C. Rose (2018). Why Culture Matters Most. Oxford University Press.
[6] Takashi Minato, Michihiro Shimada, Shoji Itakura, Kang Lee, and Hiroshi Ishiguro (2005). Does Gaze Reveal the Human Likeness of an Android? Proceedings of the 4th International Conference on Development and Learning, p. 106-111.
[7] Maya B. Mathur and David B. Reichling (2016). Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley. Cognition 146: 22-32, p. 30-31 (my emphasis).
[8] Immanuel Kant (1793/1999). Religion Within the Boundaries of Mere Reason. Translated by Allen Wood and George di Giovanni. Cambridge University Press.
[9] Dan Moller (2013). The Epistemology of Popularity and Incentives. Thought: A Journal of Philosophy 2 (2): 148-156, p. 152 (original emphasis).
[10] The lifetime wage premium for being above-average in attractiveness is estimated in the hundreds of thousands of dollars. See Daniel Hamermesh (2011). Beauty Pays. Princeton University Press.