The real AI nightmare: What if it serves humans too well?

Hogs inside a farm in Cam Giang, Hai Duong province, Vietnam, on Saturday, June 11, 2022. Vietnam has collaborated with U.S. experts to produce the world's first commercially viable vaccine against African swine fever, a disease that has killed millions of hogs across Asia and pushed up global pork prices. (Maika Elan/Bloomberg via Getty Images)


Op-Ed, Artificial Intelligence

Brian Kateman | March 31, 2024

The age of artificial intelligence has begun, and it brings plenty of new anxieties. A lot of effort and money are being devoted to ensuring that AI will do only what humans want. But what we should be more afraid of is AI that *will* do what humans want. The real danger is us.

That's not the risk the industry is striving to address. In February, an entire company, named Synth Labs, was founded for the express purpose of "AI alignment," making AI behave exactly as humans intend. Its investors include M12, owned by Microsoft, and First Start Ventures, founded by former Google Chief Executive Eric Schmidt. OpenAI, the creator of ChatGPT, has promised 20% of its processing power to "superalignment" that would "steer and control AI systems much smarter than us." Big tech is all over this.

And that's probably a good thing because of the rapid clip of AI technological development. Almost all of the conversations about risk have to do with the potential consequences of AI systems pursuing goals that diverge from what they were programmed to do and that are not in the interests of humans. Everyone can get behind this notion of AI alignment and safety, but this is only one side of the danger. Imagine what could unfold if AI *does* do what humans want.

What humans want, of course, isn't a monolith. Different people want different things and have countless ideas of what constitutes "the greater good." I think most of us would rightly be concerned if an artificial intelligence were aligned with Vladimir Putin's or Kim Jong Un's visions of an optimal world.

Even if we could get everyone to focus on the well-being of the entire human species, it's unlikely we'd be able to agree on what that might look like. Elon Musk made this clear last week when he shared on X, his social media platform, that he was concerned about AI pushing for "forced diversity" and being too "woke." (This on the heels of Musk filing a lawsuit against OpenAI, arguing that the company was not living up to its promise to develop AI for the benefit of humanity.)

People with extreme biases might genuinely believe that it would be in the overall interest of humanity to kill anyone they deem deviant. Human-aligned AI is essentially just as good, evil, constructive or dangerous as the people designing it.

That seems to be the reason Google DeepMind, the corporation's AI development arm, recently founded an internal organization focused on AI safety and preventing its manipulation by bad actors. But it's not ideal that what's "bad" is going to be determined by a handful of individuals at this one particular corporation (and a handful of others like it), complete with their blind spots and personal and cultural biases.

The potential problem goes beyond humans harming other humans. What's "good" for humanity has, many times throughout history, come at the expense of other sentient beings. Such is the situation today.

In the U.S. alone, we have billions of animals subjected to captivity, torturous practices and denial of their basic psychological and physiological needs at any given time. Entire species are subjugated and systemically slaughtered so that we can have omelets, burgers and shoes.

If AI does exactly what we (whoever programs the system) want it to, that would likely mean enacting this mass cruelty more efficiently, at an even greater scale and with more automation and fewer opportunities for sympathetic humans to step in and flag anything particularly horrifying.

Indeed, in factory farming, this is already happening, albeit on a much smaller scale than what is possible. Major producers of animal products such as U.S.-based Tyson Foods, Thailand-based CP Foods and Norway-based Mowi have begun to experiment with AI systems intended to make the production and processing of animals more efficient. These systems are being tested to, among other activities, feed animals, monitor their growth, clip marks on their bodies and interact with animals using sounds or electric shocks to control their behavior.

A better goal than aligning AI with humanity's immediate interests would be what I would call sentient alignment: AI acting in accordance with the interests of all sentient beings, including humans, all other animals and, should it exist, sentient AI. In other words, if an entity can experience pleasure or pain, its fate should be taken into consideration when AI systems make decisions.

This will strike some as a radical proposition, because what's good for all sentient life might not always align with what's good for humankind. It might in fact sometimes, even often, be in opposition to what humans want or what would be best for the greatest number of us. That might mean, for example, AI eliminating zoos; destroying nonessential ecosystems to reduce wild animal suffering; or banning animal testing.

Speaking recently on the podcast "All Thinks Considered," Peter Singer, philosopher and author of the landmark 1975 book "Animal Liberation," argued that an AI system's ultimate goals and priorities are more important than it being aligned with humans.

"The question is really whether this superintelligent AI is going to be benevolent and want to produce a better world," Singer said, "and even if we don't control it, it still will produce a better world in which our interests will get taken into account. They might sometimes be outweighed by the interest of nonhuman animals or by the interests of AI, but that would still be a good outcome, I think."

I'm with Singer on this. It seems like the safest, most compassionate thing we can do is take nonhuman sentient life into consideration, even if those entities' interests might come up against what's best for humans. Decentering humankind to any extent, and especially to this extreme, is an idea that will challenge people. But that's necessary if we're to prevent our current speciesism from proliferating in new and awful ways.

What we really should be asking is for engineers to expand their own circles of compassion when designing technology. When we think "safe," let's think about what "safe" means for all sentient beings, not just humans. When we aim to make AI "benevolent," let's make sure that means benevolence to the world at large, not just a single species living in it.

Brian Kateman is co-founder of the Reducetarian Foundation, a nonprofit organization dedicated to reducing societal consumption of animal products. His latest book and documentary is "Meat Me Halfway."
