Against Vibes Part 2: Ought You Use a Generative Model

:: ethics

Since the widespread availability and forced deployment of generative models, people have argued about the ethics of using them. Many arguments have been made that they’re bad: they use too much electricity, boil the oceans, massively infringe on copyright, put people out of work, and generate slop. And therefore, you are ethically obliged not to use them.

I sympathize with the fundamental conclusion: that there is something ethically bad about the current situation. But I can’t really take these arguments seriously, for two reasons: (1) if you look into any one of the arguments, the details are (shockingly) a little more complicated, and (2) it’s hard to judge the validity of an ethical argument when no one is willing to make explicit their system of ethics.

In this post, I’m not going to argue about whether generative models are useful. I came up with a model of usefulness in the previous post, which doesn’t say whether generative models are useful, but gives you a framework for deciding whether one is useful for a given task and user. For the moment, though, I’ll assume for the sake of argument that there is a use for generative models, so now we need to answer a different question.

Ought you use a generative model? I don’t mean is it good for a specific task; that’s an is question, about whether it is useful. I mean ought you use a model: is there an ethical argument for or against you using generative models, categorically. I’m not going to spoil the ending, because the argument is more important than my conclusion.

An Ethical Framework

It is slightly infuriating to me that people will argue about ethics without interrogating their own ethical axioms, or those of their interlocutor. It makes so many ethical arguments read like a series of randomly generated sentences.

The dominant ethics pretty much everywhere is Utilitarianism. Utilitarianism makes ethical decisions not by supposing a priori what is good and bad, but by assuming that everyone seeks some generalized notion of utility. Maybe you get utility from eating delicious baked goods, but you lose utility from having to work a soul-crushing 9-to-5 job. An ethical action, under this framework, is the one that maximizes utility, or in negative utilitarianism, minimizes negative utility.

This is a stupid ethical framework. It falls prey to various paradoxes, like stupid trolley problems and the utility monster who derives more utility from harming you than you lose by being harmed. It presupposes that “utility” is quantifiable, and ignores standard problems that arise in optimization problems, like that to optimize one variable you must ignore, and therefore minimize, other variables. As a consequentialist framework, it falls prey to many of the standard problems of trying to predict the consequences of one’s actions, but as a framework that pretends to be very mathematical, it carries these problems out to infinity. This leads to bonkers philosophies like Effective Altruism and Longtermism, whose adherents start to believe crazy things, like that to maximize utility we must rush headlong to develop a general-purpose artificial intelligence that is the utility monster, and maximize its utility.

But if I’m honest, I think my fundamental problem with it is that it exists to justify harm—to say, “yes, actually doing harm is the ethically correct choice”. That feels wrong.

So let me start with my ethical axioms, my framework for making ethical decisions. There are many like it, but this one is mine.

I’m some kind of hedonist: I believe that which is enjoyable is ethically good, and that which is harmful is ethically bad. I believe one should take actions that do not cause harm, and ideally take actions that are also enjoyable.

I believe one is ethically obligated not to cause harm, but has no positive obligation to cause pleasure; it is actually a kind of negative hedonism. But if you can cause pleasure, that is good.

My hedonism is stratified: base (Type 0) good things (e.g., deriving pleasure from eating a pastry) are good; base bad things (e.g., intentionally causing harm to an innocent being) are bad. Deriving pleasure from a (Type 0) harm, sadism, is a (Type 1) pleasure that is bad. This is ethically distinct from deriving a (Type 0) pleasure merely at the expense of something else, like eating a cruelly harmed animal, which is also bad but for a different reason. I have notes on further levels somewhere, but I think these suffice for the purposes of my arguments.

I reject that “good” and “bad” are quantifiable. I cannot decide, ethically, whether one harm outweighs another. Harm always outweighs pleasure; there is no “quantity”. “Is it more ethical to kill one man to save two men?” What a stupid question.

Instead, my framework is incomplete: some actions are neither ethical nor unethical. For example, if you are required to act and any choice you make causes harm, this framework does not help you, except in one way: it decides no action you take is unethical, since there is no ethical choice. You are only ethically obligated to act in a particular way when you have a choice between an action that causes harm, and an action that does not cause harm. “Should you cause harm to save a life?”, well, if failing to take action results in harm, and your action results in harm, then either action is equivalent under my framework. I’m happier to admit ignorance than to admit paradoxes.

I also subscribe to some kind of bounded rationality. It is entirely pointless to try to consider all consequences of your actions out to infinity, taking into account all possible information. I am only obligated to act on information I can reasonably be expected to have, and with respect to consequences that are reasonably foreseeable. It is also entirely pointless to try to consider all possible actions one could take; you aren’t actually infinitely capable. As things get more and more complex, and less and less certain, I say: who knows, try your best.

So there’s my framework: a stratified, anti-quantitative, bounded negative hedonism. Now that we have a framework, let’s start looking at ethical arguments about generative models.

The Resource Argument

One argument against generative models is that these models “use too many resources”, typically electricity or water. So what are the ethics of this argument?

Using electricity, in itself, does not cause harm. There are many examples of using electricity bringing joy and happiness to people; those uses are good. There are certainly uses to which electricity can be put that are unethical: I could electrocute you, causing you harm (unless you’re into that). But that’s not the electricity causing harm; that would be me causing harm.

The argument about electricity is probably about the effect on climate change, since increased electricity use probably means increased carbon emissions, which cause harm. So using a generative model, and therefore increasing electricity use, and therefore maybe increasing emissions, may cause harm.

This is not a strong argument. The source of electricity matters, for one. If all or most electricity came from renewables, there would be no harm. Increasingly, renewables are becoming a dominant mode of electricity generation.

Given the uncertainty involved, I don’t think this on its own creates an ethical obligation on individuals to use or not use generative models.

Worse, the argument is contingent on the technical capabilities of generative models. If they become more efficient, do they become ethical?

There are other forms of this argument, but they aren’t really about resources. For example, one might argue: they use more electricity than alternatives and produce worse results. I’ll call this a “slop argument”; it’s not really about resource use, but about capabilities. It implies it’s okay to use more electricity, and therefore possibly worsen climate change, as long as the generative model is good enough. Any such argument is doomed to failure on many fronts, such as the booster’s favourite argument that the models are going to get better and better until they eventually achieve AGI. However, its fundamental flaw in my mind is that it’s utilitarian: it justifies harm.

There’s one other form of this argument, which shifts the conversation from the mere resource cost to the cost of data centres or the industry as a whole. This is also not really a resource argument: I’ll call it “the power argument”, and address it separately. Again, a data centre using electricity doesn’t necessarily cause harm, and even if it did, it doesn’t tell us whether an individual ought to use a generative model, since their action has no effect on the deployment of new data centres and therefore the increased resource usage.

So in short, I don’t think there is an ethical imperative to not use generative models because using them increases resource consumption.

The Intellectual Property Argument

One argument I find super weird is the intellectual property argument: training generative models has “stolen” tons of work, or reproduces copyrighted work. This is particularly weird when it comes from people who normally don’t give a shit about intellectual property, consuming much of their media via pirate sites or torrents.

Worse, it’s not even clear this is infringement of intellectual property rights. Downloading copyrighted material isn’t a violation; only reproduction is. It’s very clear that some models will reproduce stuff from their training data, so you might argue there is infringement there, but it’s not clear. Sometimes reproduction is “fair use”. Reproduction for the purposes of commentary or criticism is usually fair use. The argument for and against has to do with demonstrating economic harm to the rights holder, which might be difficult. Do you think economic harm has been caused to book publishers as a result of training generative models? I kind of doubt it, but maybe. And it’s not what generative models are supposed to be doing, which makes the argument harder. You can genuinely argue that reproduction is a bug to be fixed.

Even if it were infringement, while infringement might be illegal, that doesn’t make it unethical. Who is harmed if I pirate all of The Fast and Furious movies, or if I train a model that reproduces GPL source code?

Ignoring whether or not it even is infringement, intellectual property itself isn’t necessarily good. Why is it ethically good to give a person a complete monopoly on the production of something? And for copyright, that monopoly is crazy. 70 years after the death of the original author? What kind of creativity does that inspire? Why should we give such power to one individual, to legally go after anyone who wants to compete at the production of a good idea whose author is long dead? There’s nothing ethical about this, and in fact, I firmly believe intellectual property as it is currently implemented causes significant harm.

So, no, I don’t think you have an ethical imperative to avoid generative models because they might infringe copyright.

The Slop Argument

A very common complaint, and sort of argument, is that generative models produce slop. That is, they produce output that is not of high quality; it doesn’t meet the requirements or goals of the task. In technical work, it doesn’t meet engineering standards or design requirements. In creative work, it fails to express anything of artistic interest or intent.

This is not really an ethical argument, but a description of technical capability. If the models improve, do they become ethical to use? The mere slop argument would suggest they do!

It could be an ethical argument, if you frame it as a utilitarian argument: they produce more harm than they produce utility. This would require us to admit they produce harm, and then start justifying that harm. But I’ve already rejected utilitarianism.

If one is not careful, the slop argument also attributes the cause of the harm to the generative model. But a model does not, despite the fevered branding of the industry, have agency. It cannot produce something that causes harm by itself. If I go stopper all your sinks and leave the water running in your house, it wasn’t the water that caused you harm.

If the output is poor and you decide to use it in a situation that causes harm, you caused harm. For example, if you submit a thoughtless research paper that wastes reviewer time, it was not the use of the generative model that wasted reviewer time, it was you. If I’m forced to reject garbage generated pull requests, it’s not the generative model’s fault, it’s the fault of whoever set up the model or agent and caused it to go out spewing slop.

This argument might be a good argument for not allowing generative models to be used in some settings. If the balance of probability is that the output will cause harm compared to an alternative, best not to use a generative model. This quickly becomes a technical argument: whether or not the quality of the output can be assured. Perhaps it can’t, so a conservative approach is needed. But this isn’t an ethical argument categorically forbidding the use of generative models.

So I don’t think there is an ethical imperative to not use generative models because some of their output is not of high quality.

The Employment Argument

One argument I see from time to time is: generative models are putting people out of jobs, and therefore you shouldn’t use them. As stated, this is a confused argument in many ways.

For one, an individual’s use of a generative model probably doesn’t put anyone out of a job.

For two, it’s not clear to me that this is necessarily harmful. “Not having a job” is not necessarily harmful, and so this debate quickly devolves into questions about unemployment, about whether new jobs are created in place of old jobs, etc. This has nothing to do with an individual’s use of a generative model, and much more to do with the economic systems we’re embedded in, and all the particulars matter.

For three, it’s not obviously true that generative models are putting people out of work. Many CEOs are claiming that’s why they’re firing people, but there’s plenty of evidence to suggest that’s either a lie to make layoffs more acceptable, or a lie to convince shareholders that the trillions in investments are paying off.

Finally, the generative model didn’t decide to lay anyone off, and thereby threaten their continued existence in a society in which having a job is a necessary precondition to survival. Some profit-seeking business boy did.

So I don’t really think there’s an ethical imperative to not use generative models based on layoffs.

The Power Argument

There’s only one argument I buy at all, and I don’t see it very often.

Even phrasing this argument requires care: we have to separate the individual use of a generative model from the “AI” industry.

Here, I use “AI”, which I’ve been loath to use until now, very explicitly. “AI” is a brand. It is not a technology. It is a marketing term deployed to convince people that the underlying technology (most recently, generative models) is more capable than it is—that the technology is “intelligent”.

The power argument goes like this: the “AI” industry is accumulating power at the expense of others, possibly doing harm in the process, possibly using that power to do harm. Therefore one should not use generative models.

This argument… doesn’t follow. Not as is, anyway.

The “AI” industry is concentrating power in the hands of capital, and removing it from labour. The claim of the industry is that tasks previously only able to be done by highly skilled people can—sort of, in some cases—be done by generative models. This would allow someone with a lot of GPUs and a ton of data to exercise power over people they previously couldn’t. And the industry definitely wants to use that power, and obviously doesn’t care who it harms in the process.

In fact, all the previous arguments can be augmented into a power argument, and they start to make more sense.

The problem with resource usage isn’t about generative models, but about the “AI” industry. The “AI” industry is spinning up data centre after data centre, without regard for resource use. They sometimes deploy carbon-intensive generators to meet capacity, or consume already scarce water because they have the power to take it from others who need it. These are direct harms that the “AI” industry is engaging in.

The problem with slop is not that one can produce poor quality work. That is not even a new problem. The enshittification process was started long before generative models reached their current capabilities. Long before large language models, other forms of text generation were used to mass produce garbage webpages for small amounts of profit. The problem is that the “AI” industry is wielding an immense amount of power to force “AI” use that lowers the quality of work into new areas. They’re replacing web search with “AI”. They’re replacing customer service with “AI” (chatbots were already common, but this makes them next to trivial to deploy). They’re forcing engineers to use “AI” to generate software. They’re deploying “AI” to make decisions about who to investigate for crimes, who to arrest, who to kill.

The problem with employment isn’t that generative models can take jobs, which I doubt, but that the “AI” industry can replace the output of skilled workers with poor quality output created by generative models. The “AI” industry can replace labour with capital. They’re not even particularly coy about this; some of them have gone on record about this goal. The goal is to gain power over labour, and exercise that power for profit.

Interestingly, sometimes I see the opposite of this argument: people arguing that somehow generative models democratize skilled labour. This is madness. That could, possibly, maybe, be true if these models were tiny and could run on lower-powered devices that everyone had access to. Even then, I’m skeptical, as so far I don’t believe they can be used unless you already have the domain skills required to do the work in the first place.

But ignoring my arguments about utility, it’s certainly not true now that generative models give labour power. The only useful ones burn billions of dollars merely to operate, and required almost trillions to reach that state. The only useful models are owned and operated by the “AI” industry. If you want to use them, you have to go to the “AI” industry. That’s not democratization of anything; that’s the “AI” industry having power over you.

So the “AI” industry definitely has power, and they use that power to cause harm. So is there an ethical imperative to not use generative models, based on the power of the “AI” industry?

Well, these harms are not caused by generative models. They’re caused by the “AI” industry, not the underlying technology. They do not go away if you stop using generative models.

But your action to use a generative model may give power to this industry, and thus, cause harm. So let’s consider individual actions.

The Individual Action Problem

The main source of the “AI” industry’s power is economic power. They promise to be able to reduce labour costs, and they need a lot of money to make that a reality (or so their pitch goes).

I’m skeptical that mere use supports the “AI” industry in a monetary sense. The “AI” industry is currently supported by an ungodly amount of investor money and debt, not by the small amount of revenue it brings in. You using a generative model, and even paying a subscription, does not provide support for the power of this industry.

I want to be clear that I’m not rejecting individual action. I think individual action is important. If your action causes harm, no matter how small, I consider that unethical.

I’m saying this particular individual action, using a generative model or even subscribing to a generative model, does not cause harm at all. This is not the ethical problem, because it does not contribute to the power of this industry. If every human on earth boycotted the industry, it would still have as much power as it currently has. In fact, given that most subscriptions appear to cost the “AI” industry money, they might be better off if we boycotted them. We could debate this; I don’t think it’s clear. But let’s accept my premise for a moment.

The money you give the “AI” industry is not the only way to give the “AI” industry power, and taking these other actions causes harm. Using generative models uncritically gives the “AI” industry power, supporting the claims they make and the transfer of power to them. Reporting on generative models and the “AI” industry uncritically gives their claims credibility, giving the “AI” industry power. Enabling their wide-spread deployment, in addition to any harm caused by slop, gives the “AI” industry power over the context in which they were deployed.

In addition to being a hedonist, or perhaps because of it, I’m also an anarchist. I am very against power and hierarchy. I think the main thing they do is cause harm, and we should seek to reduce power and hierarchy as much as possible.

I think there is plenty of evidence that this technology, in the hands of this industry, is causing harm. I don’t think certain individual uses of generative models are unethical, but any action that empowers the “AI” industry is certainly unethical.

So what should you do?

I’m normally pretty reluctant to give direct advice on actions you should take. My standard disclaimer is: all advice is one person’s opinion.

… But we’re several thousand words in and I’ve made my ethical arguments as clear and precise as I can, so strap in buddy while I tell you what to think and how to act.

There is no ethical imperative to not use a generative model; that depends on whether the particular use will cause harm or not.

There is an ethical imperative to deny “AI” (the industry) power.

So how does one deny “AI” power? Well if I knew that, I assure you we wouldn’t be in this mess. But I can tell you some actions I’m taking.

First is education.

I’ve given talks on “prompt engineering” and how it’s not engineering, written these blog posts, and remain engaged in the “AI” discourse despite hating it.

I want people to think clearly about this technology, this industry, and ethics. I want people to understand the technology, which is not magic. I think this is necessary to refute the lies this industry is telling. I think it’s necessary to demonstrate exactly what the technology is and is not capable of. I want people to be able to separate out different concerns and arguments so they can refute the lies and nonsense arguments others are telling.

Will this work? I don’t know; maybe.

Second is policy work.

At UBC, I’ve tried to inform policy around the use of generative models. I’m trying to restrict the use of generative models around the university. Some of that is public, some is not. Some has been successful, some has not.

In my own lab, I am in a position of power (ugh), and in that role I try to create sensible policies. I don’t outright forbid the use of generative models; I’m against my own power, and not sure using a generative model is necessarily problematic. I do make clear that the user is responsible for the use of the generative model. If the user cannot stand by every line of every artifact, cannot justify a design decision, cannot explain something, then they, not the “AI”, have failed.

Third is boycotting.

As I said already, I’m not sure that mere use is unethical. It’s certainly not if you have no choice. Even if you have a choice, mere use may not empower the industry.

Still, I, largely, refuse to pay money for this technology. (I spent $10 on various experiments.) I don’t think this has a direct effect, but I think it’s an important line to draw to do what I can to deny “AI” power. I try to avoid technologies and companies and products that adopt “AI” and support this industry.

I think boycotts have indirect effects as well as direct effects. Refusing to engage can cause conversations, it can change minds. That will probably have more effect than denying these companies a few dollars a month.

I have no qualms about using local models. The ones I’ve used are little more than toys. We have some running in the lab that I haven’t tried; maybe I’ll build a machine with a proper GPU and give those a shot.

I do use some industry models; as a faculty member, I can access some pro models for free. I don’t use them very enthusiastically, but I also don’t worry too much about using them. I think using them is necessary to understand them and to educate others. And it marginally costs the industry money when I use them, which is a small bonus.

Fourth is sabotage.

I have deployed lots of “AI” countermeasures, both for training and inference. I don’t know how effective these are, but I deploy them anyway. I should probably go read some activism books, such as How to Blow Up a Pipeline: Learning to Fight in a World on Fire, to find more effective strategies.

I’m not sure any of this will be effective, but I’m ethically obligated to do something.