Many people who discuss the future of work in a world of AI talk about using AI to “augment and not replace” humans. Indeed, it’s one of the core principles of the Stanford Institute for Human-Centered Artificial Intelligence (HAI): “AI should be designed to augment human capabilities, not replace them.”

I’ve always found the distinction between “augmenting” and “replacing” human capabilities unclear. Partly, this confusion arises because these terms can be interpreted differently depending on context — and often, augmentation and replacement can really be the same thing. I’m a fan of examples to help make things clear, so let’s take a look at two.

Marketing

Perhaps the simplest example is generating marketing assets — something many companies are already using AI for today. Whether it’s as simple as getting help with website copy or as complex as using AI video generation for TV ads — many (if not most) companies are using AI in some part of their marketing creative process. But does this count as augmenting or replacing?

Since humans are still involved in the process, there is a clear case that this simply augments those people. I think that’s a fair take. On the other hand, without AI there would have been more people involved in those processes. So, one could argue those people have been replaced.

Ultimately, any time we augment a human, we reduce the number of humans required to complete a given amount of work. This is why I think the augment vs replace distinction doesn’t make a lot of sense.

Fighting Fraud

Another way to look at the augment vs replace statement is that we shouldn’t replace human judgement with AI systems. That’s a little different than trying not to replace individual humans. Let’s think about this in the context of a fraud-fighting example.

Fighting fraud is actually something many financial institutions leverage AI for today. Do they fully outsource decisions to AIs in these instances, or do they just use the AI to augment human judgement? The answer is often both!

For instance, when AIs mark a transaction as very unlikely to be fraudulent, the transaction can often be completed with no human review. This is an important case to remember when debating this topic because people greatly benefit from the speed of decision-making enabled by the AI. Credit cards can’t work without this sort of system — imagine waiting in line at Starbucks for someone at Chase to review your transaction before it was approved!

On the other hand, when the AIs flag a higher-risk transaction, there is often a manual review process. For larger loans (think car loans or home loans), this may just mean being asked for an additional document or getting on a call with a representative. In the credit card example, you might be able to text back that the transaction is approved or call up the company to validate the purchase.
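The tiered workflow described above can be sketched as a simple threshold policy. To be clear, the function name and cutoff values below are hypothetical, and this assumes the model outputs a fraud-probability score — real systems are far more involved — but it captures the shape of the decision:

```python
# Illustrative sketch (not any real bank's system): route a transaction
# based on a model's estimated probability of fraud.
# The 0.05 / 0.50 cutoffs are invented for illustration.

def route_transaction(risk_score: float) -> str:
    """Return a routing decision for a scored transaction."""
    if risk_score < 0.05:
        # Low risk: complete instantly with no human review.
        return "auto-approve"
    elif risk_score < 0.50:
        # Medium risk: ask the customer to confirm, e.g. by text.
        return "customer-verify"
    else:
        # High risk: put a human analyst back in the loop.
        return "manual-review"

print(route_transaction(0.01))  # auto-approve
print(route_transaction(0.30))  # customer-verify
print(route_transaction(0.90))  # manual-review
```

Note that even in the “manual-review” branch, the AI has already done most of the work by deciding which transactions are worth a human’s attention at all.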

In those instances, we have put the human back in the loop and are using AI to augment their decision processes. And this makes sense. But the blanket idea that we shouldn’t replace human judgement with AI doesn’t make sense — we’d all be stuck in credit card lines all day long!

It Depends

And unfortunately, that brings us to where many discussions about AI end up — “it depends.” Putting AI in the real world involves a series of choices and trade-offs, and there really aren’t simple answers to any of them. The right answer truly does depend on circumstances and details.

But I do think that Stanford’s HAI has the right general principle — we should keep human beings at the center of our decision-making about those trade-offs. We should be thinking about the small businesses that can make their first video ad thanks to generative AI. We should be thinking about the customer stuck in line at Home Depot waiting for approval. And we should also be thinking about the customer misidentified as a fraudster looking for a way to set the record straight.