Large Language Models Reduce Harm
Chatbots do more than answer questions and enable cheating in school. They also reduce transaction costs and harm.
Because they reduce transaction costs, large language models (LLMs) like Grok and ChatGPT can leave people better off than they would otherwise have been, and in that sense they can reduce harm. Let me unpack that claim by drawing on economics (for the nature of transaction costs) and philosophy (for the nature of harm). But first, a quick gloss on how LLMs work.
LLMs are programs that users can prompt for content (e.g., ‘Please explain why the Roman Empire fell, using bullet points and examples’). Trained on massive amounts of text to learn language patterns, they use that training to predict strings of text that will (often, but not always) respond appropriately to a prompt. They’re fairly fluid and flexible in the tasks they can perform: summarizing and organizing large amounts of information (e.g., a legal document), writing poems or songs in a specific voice (e.g., ‘Compose a poem for my wife in the style of Shakespeare’), devising a recipe from leftovers, and much else besides. (One of the main limits on LLMs is the user’s imagination.) Their outputs are often, but not always, on point; quality varies from model to model, and results should still be checked for accuracy.
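To make that prompt-and-response loop concrete, here is a minimal sketch of what prompting an LLM from code can look like. It assumes OpenAI’s Python client library; the model name and the prompt are placeholders, not a recommendation of any particular provider.

```python
# Minimal sketch: send a prompt, get predicted text back.
# Assumes the openai Python package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": "Please explain why the Roman Empire fell, using bullet points and examples.",
        }
    ],
)

# The reply is predicted text, not verified fact, so it should still be checked.
print(response.choices[0].message.content)
```

Most people will type into a chat window rather than write code, but the code view makes the mechanics plain: a plain-language request goes in, predicted text comes out.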
To see how LLMs can reduce harm by lowering transaction costs, it helps to define transaction costs. Economists describe them as the time, effort, and resources required to carry out the purchase or acquisition of goods and services—that is, the cost of effecting a transaction. Shopping malls, for instance, reduced transaction costs relative to the older model of traveling to scattered shops across a region. This reduction in transaction costs, all else equal, improves the consumer’s situation by freeing up resources that can be spent elsewhere on something more valuable to her.
What about harm? The standard view among moral philosophers is the counterfactual theory of harm: to harm someone is to make them worse off than they would otherwise have been. Stealing someone’s wallet, for example, harms them because it (presumably) leaves them poorer and more stressed than they would have been had the wallet not been stolen. As the moral philosopher Craig Purshouse explains:
In order to determine whether a particular course of conduct is ethically permissible it is important to have a concept of what it means to be harmed. The dominant theory of harm is the counterfactual account, most famously proposed by Joel Feinberg. This determines whether harm is caused by comparing what actually happened in a given situation with the ‘counterfacts’—i.e. what would have occurred had the putatively harmful conduct not taken place.
So, with a handle on transaction costs and the counterfactual theory of harm, here are a few examples of how LLMs can reduce harm by lowering transaction costs:
Translating legalese: LLMs can rephrase the complex language of a bill into relatively accessible, layperson-friendly terms. While not a substitute for a lawyer, this can be a lifeline for someone without legal training who needs a basic understanding. This reduces the cost (in time, energy, and stress) of digesting complicated information. To be fair: LLMs aren’t perfect here, but they compensate with near-perfect availability, speed, and a low cost of access. (Prompting LLMs is a skill—yes—but easier to learn than legal jargon.)
Surfacing relevant information: LLMs can scan a vast quantity of material and highlight relevant bits for a report or survey. For instance, journaling becomes more valuable when you can feed your journal to an LLM and ask it about trends, patterns, and key passages. Unlike simple keyword searches, LLMs can (at least to some degree) understand context, emotion, and narrative arcs. That’s a substantial reduction in search costs.
Drafting literature reviews: LLMs can produce rough-draft literature reviews on specific topics—something like a custom Wikipedia page—without the user having to sift through countless academic articles. This reduces the burden of compiling information and lets users focus on higher-priority tasks.
Devising a recipe from the fridge’s leftovers: LLMs can generate a recipe from a list of whatever ingredients are sitting around in your fridge or pantry. This reduces the transaction costs of figuring out what to cook. Instead of spending time Googling ideas or defaulting to takeout (which carries its own costs in money, time, and health), the user can offload the mental labor to the LLM. The result may not win a culinary award, but it’s fast, functional, and typically good enough to avoid wasting food or ordering out.
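To show how little that last example asks of the user, here is a minimal sketch of the leftovers-to-recipe prompt, again assuming OpenAI’s Python client. The ingredient list, model name, and prompt wording are all illustrative placeholders.

```python
# Minimal sketch of the leftovers-to-recipe example.
# Assumes the openai Python package (v1+) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical contents of the fridge.
leftovers = ["half an onion", "two eggs", "cooked rice", "spinach", "soy sauce"]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": "Suggest a simple dinner recipe that uses only these ingredients: "
            + ", ".join(leftovers),
        }
    ],
)

# The result won't win a culinary award, but it answers "what's for dinner?" in seconds.
print(response.choices[0].message.content)
```

The point isn’t the code; it’s the size of the ask. A few lines, or a sentence typed into a chat window, stand in for the search, the indecision, or the takeout order.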
LLMs make mistakes, some of them silly; their answers can be shallow, and they sometimes hallucinate. But they continue to improve, and they’re available nearly all the time, relatively affordable (for now), and easy to use. Their ability to reduce transaction costs in a wide range of cases, and thereby to reduce harm, is underappreciated.