The strange and future political economy of AI
Future political activists will lobby for rights for android and AI romantic partners and family members, whether morally warranted or not
The future is hazy, and predictions are generally the pastime of fools. And yet here I want to indulge in a prediction about the short-to-long term, perhaps over the next couple of decades, using insights from political economy: specifically, that a major subset of voters will lobby, in the near to somewhat distant future, for laws to protect their romantic and familial relationships with artificial intelligence (AI), whether or not (and especially if not) AI has any moral standing; that is, whether AI is the sort of entity that is worthy of moral consideration, rights, and so forth.
Why do I predict that? To unpack this, we must first gather a couple of puzzle pieces. Begin with the insight that when the cost of error is low, all else being equal, people tend to indulge in more epistemic irrationality, that is, holding beliefs and engaging in practices that lack factual or evidential basis. This insight applies to the political domain, where voters have little incentive to be informed for the simple reason that their single vote is highly unlikely to decide anything electorally, whether it is informed or not (a quick back-of-envelope illustration follows the quote below). And so it makes more sense for an individual voter to use their vote and political advocacy not to influence politicians or policy, but to secure the benefits of cooperation and solidarity by signaling fidelity to their political tribe. And better still if the cause they signal makes little factual or evidential sense: such a cause better reassures others that one is loyal for tribal reasons, not factual or evidential ones. As Dan Kahan explains,
Where positions on some policy-relevant fact have assumed widespread recognition as a badge of membership within identity-defining affinity groups, individuals can be expected to selectively credit all manner of information in patterns consistent with their respective groups’ positions. The beliefs generated by this form of reasoning excite behavior that expresses individuals’ group identities. Such behavior protects their connection to others with whom they share communal ties.
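To make the "one vote almost never decides" premise concrete, here is a minimal back-of-envelope sketch of my own, not anything from the political economy literature quoted above. It assumes a simple binomial model of a two-candidate race: a single ballot matters only if the rest of the electorate splits exactly evenly.

```python
from math import exp, lgamma, log

def pivotal_probability(n_voters: int, p: float) -> float:
    """Probability that one extra ballot decides a two-candidate race,
    i.e. that the other n_voters - 1 ballots split exactly evenly.
    Assumes n_voters is odd and each other voter independently backs
    candidate A with probability p. Computed in log space so the huge
    binomial coefficient and the tiny probabilities don't overflow
    or underflow a float."""
    others = n_voters - 1          # must be even for an exact tie
    half = others // 2
    log_comb = lgamma(others + 1) - 2 * lgamma(half + 1)  # log C(others, half)
    return exp(log_comb + half * log(p) + half * log(1 - p))

# A perfectly divided electorate of ~1 million: already a long shot.
print(pivotal_probability(1_000_001, 0.50))  # ~8.0e-4, about 1 in 1,250
# Tilt the electorate by a single percentage point and it all but vanishes.
print(pivotal_probability(1_000_001, 0.51))  # ~1e-90
```

Even in a perfectly divided electorate of a million, the chance of casting the deciding ballot is roughly one in 1,250; in any electorate that leans even slightly one way, it is effectively zero. That is the sense in which holding an uninformed political belief is nearly costless to the individual voter.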
Next, consider that since individual voters lack the incentive to get things factually right, we should expect many of them to hold political beliefs, lobby for political causes, and so forth, that have more to do with their tribal identity and personal interests than with facts or evidence. And with the rise of advanced AI capable of generating convincing video, images, and text, alongside a loneliness epidemic, fewer children, and fewer marriages, there will be a demand for socializing, or something near enough. There are upsides to fewer children and marriages, but there are downsides too, such as higher levels of depression, anxiety, and loneliness. People will seek out substitutes for social contact when they feel the need, just as people search for suitable substitutes whenever there is a shortage of something they demand.
More sophisticated AI applications and devices will take the place of spouses, children, and even pets for a subset of the population who either cannot or will not form human relationships to satisfy their personal, emotional, and social needs. Perhaps AI relationships will be satisfying enough that, for some people, forming a relationship with a human won't seem worth the effort, just as fast food is often easier than steak and potatoes. There are already nascent industries offering services like AI chatbots on OnlyFans that are ever more able to convincingly impersonate one's favorite porn stars, along with the rise of an AI boyfriend industry where,
A growing number of women are seeking connection and comfort in relationships with chatbots — and finding their approximation of empathy more dependable than many human partners' support. […] These female AI users, flipping the stereotype of under-socialized men chatting with AI girlfriends in their parents' basement, are challenging assumptions about the nature of human intimacy.
And presumably these relationship bots, for lack of a better term, will become ever more sophisticated and cheaper over time, lowering the cost of an artificial relationship and thereby raising the comparative cost of a relationship with a human being. This will be especially true where the user of the relationship bot can modify and customize their AI paramour to suit their needs, to keep the relationship interesting, and so forth, though there are obvious downsides here too, such as discovering one is dissatisfied with a relationship bot that is too agreeable.
Because voting, lobbying, and activism on behalf of AI rights and relationships would carry a relatively low cost, we should expect that there will be, if there isn't already, a burgeoning political movement aimed at establishing such protections for AI and AI relationships, up to and including legal protections for AI against sexual harassment ('don't hit on my AI boyfriend, please!'), legal protections for human–AI couples against discrimination, and even paid maternity and paternity leave for employees with 'synthetic' children. The claim here is not that AI political groups will succeed (though they may, if they are as organized as the NRA), but rather that they would be motivated to be politically active by the low cost of being wrong in their political beliefs and by the personal benefits such activism would secure: boosting their reputation with their political tribe, establishing a shared sense of identity that validates and affirms their novel relationship style, and so forth.
What makes this strange isn't that AI couldn't deserve moral consideration (there may come a day when it does), but rather that we should expect this kind of political activism and lobbying regardless of whether AI has moral standing, and especially if there is reason to doubt that AI is, in fact, morally deserving of such rights and protections. After all, if AI were clearly and reasonably deserving of legal and moral protections and rights, then holding such political views and lobbying for them would be a weak signal of tribal loyalty: one could easily hold such views merely on the basis of solid arguments and evidence, unrelated to signaling tribal loyalty.
The point here is not to single out anyone for their lifestyle or their political beliefs, but simply to predict that a political movement for AI rights and protections, especially for people in relationships with AI (and the people who love them), could easily get off the ground in the near to somewhat distant future. If nothing else, this development is suggested by insights from political economy and human nature. And, in any case, the future interaction of AI and political economy should be exciting to watch.