Let’s talk about the agile morality of AI so you can determine how it fits within your own ethical spectrum—at least for now.
Before reading this, you should know that, at least part of the time, I’m a technical writer. A good part of the revenue stream for my consulting practice comes from explaining tech stuff so that other people can use that tech.
That’s also an area that generative AI is starting to encroach on—in fact, it’s happening enough that I’ve had to sign agreements with some of my clients promising that nothing I sell to them is the result of using generative AI. So I have, as they say, “skin in the game” and, at the end of this post, I’ll put my money where my ethical mouth is.
It’s not my intent in this very long post to tell you what’s right or wrong; you’ll have to decide that for yourself. I’m just providing a way to think about the issues … and you may not even agree with that (you may just use this post as a way to arrive at your own way of thinking about the issue). But, I think, the ethical use of AI is a problem worth coming to a conclusion about.
Now, having got the personal stuff out of the way, there are two sets of ethical issues here: One set is typical of any new technology; the other is what’s special about AI. Let’s look at the typical stuff first because it helps us understand what’s special about the morality of AI.
There are a number of ethical considerations that apply to AI just as they apply to every technology: Is it equitable? Is it sustainable? Is it being used responsibly? And so on.
I’m not suggesting those issues are either easy or trivial. Ideally, using AI will be equitable, sustainable, responsible and all the rest (this list is probably not exhaustive). But it will probably turn out that AI falls short in some, or even all, of those categories. No technology has been ideal in all of them, after all.
We will have to, then (as we always have), take a utilitarian approach. That means asking: “As long as there is no cost so objectionable that no benefit could justify it, do the benefits of this technology outweigh the costs, and how can we regulate its use to reduce those costs?” We have some historical examples that can help us understand what we’ve done in those situations.
We need to recognize, for example, that at any moment those “cost and benefit” questions are hard to answer because we typically don’t know what we’re talking about, let alone what the moral implications are. As Gates’ Law points out, while we tend to overestimate the impact of technology in the short run, we always underestimate technology’s impact in the long run (the name of this law may be misattributed).
Internal combustion engines (ICEs) are a good example. ICEs revolutionized the structure of transportation. All on their own, ICEs created winners and losers: There is no doubt that people who do not have access to cars are disadvantaged compared to those who do. (Alternatively: People who have cars are privileged over those who don’t—though more people object if the problem is phrased that way.)
But it gets worse because ICEs are part of the triumvirate (ICEs, steel, elevators) that created our cities where more than half the people in the world currently live. It’s debatable whether we treat everyone in our cities justly (think about the legal status of the people in your city who are unhoused).
At best, for any technology, we guess at what will happen and start to preemptively formulate the ethical questions to see what answers we’ll get. But, realistically, we don’t ask the right questions until we get there. I’m reasonably confident that when Elisha Otis created the first safe elevator, no one considered the “ethics of elevators”: how elevators would enable the concentrations of people that dominate decision-making in our cities’ downtown cores. But here we are.
The result is that we keep adjusting the way we use technology to try to get closer to … well, if not a morally good answer, at least a “less reprehensible” one to the question of how cities should support people. We frequently get some of it wrong, regret some choices and incur enormous costs trying to get closer to “right.” The only good news here is that we’re more aware of the moral implications than we were in the past (especially in terms of equity).
It’s an agile approach to morality: Make no decision until you have the problem.
There’s an old joke that’s relevant here: Two people are watching a construction site where a front-end loader is digging a basement. One says, “If it wasn’t for that machine, there would be a hundred people in there with shovels.” After a pause, the other says, “Or a thousand with teaspoons.”
We accept such displacement of a hundred or a thousand gainfully employed human beings because we couldn’t afford to build the structure if we had to pay a hundred people to build it (or we couldn’t have paid them a living wage; we would have had to, as we say, exploit them). We accept it because the building will be affordable to many more people than the number who would have earned a living working on it. And, plainly, I’m ignoring whatever suffering is incurred by the people who expected to earn a living digging basements.
Historically, lowering the costs so that more goods and services can be delivered to more people has bought technology a lot of moral forgiveness. Henry Ford’s assembly line, for example, lowered the cost of a car and raised the wages of the employees, to the point where the people on the assembly line could afford to buy the car. We decided that was a good thing (at the time, at any rate, which is the best we can do).
Another example: The British online grocer Ocado has a new packing plant in the southeast of London that was designed from the ground up to be worked by machines, not people. The staff call it “The Hive.” The plant enables online ordering and delivery using orders of magnitude fewer people and at a lower cost than a purely manual system. Preparing those orders would normally have required the intelligence of many people. Instead, that facility contains either 3,300 robots or one (your choice). We still seem to be OK with that.
In theory, AI should be getting the same level of forgiveness as it displaces people. Why doesn’t it?
Answer: Because generative AI is getting closer to passing the Turing test.
This is where, if I haven’t already lost you, I’m probably going to lose you: Hannah Arendt’s book The Human Condition can be helpful here. Arendt divided the “things people do” into three areas she called work, labor and action: work produces the durable things that make up our world (buildings, books, software, art); labor is the endlessly repeated activity that keeps us alive and keeps that world maintained (growing food, cleaning, hauling things around); and action is what we do together in public (speech, politics, governing ourselves).
Technology is traditionally forgiven for displacing people in the area of labor, and that muddies the ethical waters around AI, because AI also displaces people, but in the area of work (and, potentially, in the area of action).
For example, software developers (like me) know that our greatest problem in using any technology is the lack of documentation that tells us how to use it. Unfortunately, the companies creating and delivering technology know that, if they take the time to produce and keep up to date all the documentation developers need, they won’t be able to compete with other providers. Stack Overflow, community forums, Udemy and my career as a technical writer/instructor all live in this gap.
An AI documentation tool could reduce the cost of producing all the documentation that developers need to something so trivial that every company could afford it (for example, a tool that generates the documentation a developer needs in response to an individual question, much as Google currently does for searches). The result of reducing the cost of creating software is, as Gates’ Law points out, literally incalculable in the long run. And that’s before we factor in the drop in the cost of software through having AI produce (or help produce) the software we need. But this is now displacing people from work, not labor, and we feel differently about that.
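To make that concrete, here’s a minimal, purely hypothetical sketch of the idea (nothing in it names a real product, and call_model() is a stand-in for whatever generative model you have access to, not a real API): instead of writing and maintaining a complete manual up front, generate the one page a developer actually asked for.

```python
# Hypothetical sketch of an "answer-as-documentation" tool.
# call_model() is a placeholder, not a real library call.

from dataclasses import dataclass


@dataclass
class DocRequest:
    question: str     # the developer's actual question
    api_surface: str  # signatures/comments pulled from the codebase or API spec


def call_model(prompt: str) -> str:
    """Placeholder: wire this to whichever generative model you use."""
    raise NotImplementedError


def answer_as_documentation(req: DocRequest) -> str:
    # Generate the one page the developer needs, when they need it,
    # rather than maintaining a complete manual in advance.
    prompt = (
        "You are writing developer documentation.\n"
        f"API surface:\n{req.api_surface}\n\n"
        f"Question: {req.question}\n"
        "Answer with a short how-to and a code example."
    )
    return call_model(prompt)
```

The economics are the point: once a tool like this exists, the marginal cost of each additional answer is roughly one model call, which is exactly the kind of cost reduction that has historically bought technology its moral forgiveness.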
This is what is unique about the ethics of AI: that it displaces humans from work and, potentially, action.
To put it another way: The concern with AI in the area of action isn’t that robots will turn on us (see: Terminator). It’s more fundamental: that robots will take over governing us. Even if a government of robots turned out to be benign or even nurturing, we feel it’s wrong for us not to govern ourselves. We have, on occasion, accepted (even welcomed) tyrants, but we’re not willing to accept robots.
But having AI govern is too far into our future for us to consider; we don’t even know the right questions yet. Let’s just consider the problem in front of us: the area of work. Let’s use creating art as an example: Is having AI produce “art” morally unacceptable in a way that the Ocado plant and the front-end loader are not? Will accepting AI-generated art displace human artists in a way that we can’t forgive?
This issue isn’t completely new. We have continually questioned the use of technology in art: Prints, which create “art for the masses,” did not (and still do not) always count as art. Using cameras, it was claimed, could not produce art, and neither could using airbrushes or computers. There is still dispute over whether “textile art” counts as art, in part because machines are used in creating it.
Cameras did not displace painters, for example: Painting moved on to Impressionism, Expressionism and other varieties of art that don’t duplicate what the camera does. Cameras actually created whole new avenues of art: not just “photography as art” but paintings that look like photos (Gerhard Richter, Chuck Close, photorealism in general) and photos that spin off painted art, at least part of the time (Jeff Wall, the Pictures Generation).
AI is different from previous technologies, however, because instead of enabling humans to create new kinds of art, it raises the possibility of replacing humans in creating art.
Specifically, the problem is: What happens when AI work and human work are indistinguishable? At that point, the history of technology in art doesn’t help us because we have something we’ve never had before: something that truly passes the Turing test.
We’re not there yet (we can still tell the difference) and an agile morality says that, while we can speculate, we won’t be able to decide until we get there. But art is just one example of what we do that we can classify as work.
As I said, I can’t give you an answer on the morality of AI in the area of work. I can provide myself as an example.
Personally, I think humans do “thinking” as well as “thinking” can be done. Unlike transportation, where changing the technology gave us faster ways to move things, I don’t think moving “thinking” from human wetware to silicon will result in better thinking. I also don’t think that having access to more information substantially improves thinking.
I don’t, in other words, believe in the singularity—at least as far as it means producing a “better” (or even an “as good as”) intelligence. A faster car is an example of technology in the area of labor; thinking moves us to work and action. I believe those categories reflect a reality that won’t be erased.
I may be being naïve.
I recently wrote an article about using AI to generate summaries of articles. The results varied from OK to wrong. I could live with AI producing those summaries (even when wrong) for three reasons: having summaries of material is valuable, it’s prohibitively expensive to produce those summaries using human beings, and having some summary is better than no summary. Plus, I suspect the quality of the output will get better, though (as I noted above) I don’t think the summaries will ever be as good as what a human could produce.
That thinking extends to my careers in both technical writing and software development. T.M. Scanlon’s rule from What We Owe to Each Other (better known as “the book from the TV show The Good Place”) is relevant here: We should act on principles that the people affected by our actions could not reasonably reject.
That means that, if there are benefits for everyone in using AI in technical writing and software development, I can’t deny people those benefits only because it will disadvantage me. “Preserve my job” is, I think, precisely the kind of principle that others could reasonably reject. This is consistent with my own choices and protects me from being a hypocrite: Keeping buggy whip manufacturers in business would not, for me, be a reasonable basis for rejecting ICEs (global warming, on the other hand, would be).
On the same basis, I’m currently OK with AI-generated art. I think if more people have more art in their lives, that’s a good thing, even if some of it isn’t created by human beings (artists will move on). I have several, perhaps many, artist friends who disagree with me. I’m not sure they’re wrong and they might change my mind.
I might change my mind because my current decision is based on the belief that I can tell the difference between “AI art” and “human being art” (and, while I prefer the art that human beings make, others may not). If my ability to tell the difference disappears, my decision may change. I may yet, for example, adopt some variation on the Butlerian prohibition from Dune: Thou shalt not make a machine in the likeness of a human mind. Until then, though, I’m good with it.
What can I say? It’s an agile morality.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.