41 Comments

This is a super argument. I love the connections with Dune (& explanation of why the jihad was "Butlerian") and Ex Machina. Also the ambivalence of the nerdosphere about super AI. And the underlying weakness of reasoning by utility, consequence and expected value. I hope that this essay gets a lot of discussion. Perhaps you know that Thomas Metzinger has tried hard to get the moral hazard of machine suffering into the public eye (if not, search for synthetic phenomenology).

I think you could have paid more attention to the difference between artificial intelligence and artificial consciousness. As far as we know, neither demands the other, and reasons why people are pursuing them tend to differ.

I was taken by Metzinger's self-model theory of consciousness and, nerding all the way, wrote about how that and related ideas might be used to grow a conscious AI. My fictional AI suffers multiple kinds of angst, but I neglected all the casualties of its reinforcement-learning forebears (your "monsters"). I wound up thinking that anything able to learn to be an artificial ego would have the drive to continue learning. And that would make the AI and humans each nervous about their existential risks.

But, once it's conscious and has an ongoing life story, how do you decide whether or not it is an abomination? Or isn't it too late for that?


Excellent piece. I'll tweet about it soon.

I would just add: any 'existential upside' to AI will still be there, ripe for plucking, in a hundred or a thousand years. The potential will never go away.

By waiting patiently, pausing all further AI research, and taking the time to fully, deeply assess the real risks of AI, we don't lose any of the upside, and we minimize the downside.

Sure, AI might be able to help currently living generations to some degree -- but why not let the AIs start helping our great-great-great-grandkids, once we make sure AIs will play nice with our descendants?

In other words, just hit the 'pause' button on AI -- not forever, just for a few centuries.

Nov 22, 2021 · Liked by Erik Hoel

Stumbled on this old and very interesting essay. I have to say, I find most discussions of AI a little too abstract. I don't think AI is going to advance by an explicit attempt to build quasi-human conscious Frankensteins in a lab. Such efforts may happen but they will be academic exercises. Instead, it is going to advance by taking over and exceeding human capacities one by one in areas where there are economic rewards for doing so. Such a process may eventually lead to "strong" AI, but it also may not. If such replacement is allowed to occur in an unlimited and uncontrolled way it will have done great damage to human community long before any strong AI results. Can we create barriers or limits to that process?

Jul 28, 2021 · Liked by Erik Hoel

I would say that, whether or not AI is "successfully aligned", it is a final departure from the human into an abominable world. Of all the possibilities, whether supposedly positive or negative, humanity as we have been is lost, and something unrecognizable takes its place. Even if we got everything we wanted, would it really do us any good? Are we that sure we know what's best for ourselves?

Looking at how eager the world is to press forward into this unknown, it seems very hopeless, and there's not much time left. How do we make a difference?


Profound and much-needed insights here, Erik, as usual. I agree with much, but disagree as well.

I think you are right in identifying just how high the stakes are going to become as we build (garden, train, raise) increasingly humanlike AIs. I think voices like yours, growing ever louder in coming years, will help humanity get the "social pushback" -- the activism, oversight, and regulation that are needed as these technologies get more powerful. No ban is possible, in my view, but a whole lot more regulation is coming our way, thank goodness.

I think you are also right in your implication that there is far more we're going to have to learn from neuroscience in order to build more humanlike systems. You allude to that in your great new post on the still primitive state of neuroscience:

https://www.theintrinsicperspective.com/p/neuroscience-is-pre-paradigmatic

Where we may disagree is that I expect our most capable AI will have to become increasingly like us, via both neuromimicry and biomimicry (analogs to evo-devo genetics), in order to become significantly safer, more ethical, and more intelligent. I don't see any other viable path for the human-machine symbiosis as their complexity scales. Nothing else seems realistic or defensible.

I also think we're going to have a lot more time with these primitive, first-gen general AIs than many others do, before they'll see another punctuation in their abilities. I agree with Peter Norvig that these frontier models are the first-generation of both "general" and "generative" AI.

https://www.noemamag.com/artificial-general-intelligence-is-already-here/

But I think there is far, far more they're going to have to learn from metazoan bodies and brains before they can kick off big robotics advances, have true world and self models, or make trustworthy decisions. Lots of R&D and time still for us to get our responses in place. My present guess as to when generally human-surpassing AI emerges is the 2080s. I wonder where yours would be.

I try to defend those claims, with a co-author, in our own Substack series on these topics, Natural Alignment:

https://naturalalignment.substack.com

Thanks for all you do!


Great thoughtful essay. This fellow MA'chusetts novelist (bit.ly/CWS-p) is just catching up on your back catalog. Congrats on your bold/wise move today leaving my wife's long-ago alma mater.

Two thoughts on your essay here:

1) Re. "starting with this one": Pardon me for being a little shy of Herbert when Hubbard's man-made religion has led to so much deceit and exploitation. I.e., the track record of sci-fi writers founding moral systems is a bit less than stellar.

2) You write, "...it is an abomination... not an evolved being." Thus, evolution is the opposite of abominable? (e.g., delightful, beautiful, blessed). Yet the logic of macro-evolution relies on time+matter+purely random intelligence-free unguided chance. All three elements are devoid of morality. An AI could make the same claim (time+matter+chance) and be correct--if it left out the fact that *it* was created. In either case, a positive (2nd law of thermodynamics-bucking) telos is smuggled in; borrowed from the Christian worldview in which man is made "very good" in God's image, male and female... until he listens to the imitator, Satan, bent on founding his own religion and recruiting his own worshipers, denying the Creator they (and we) all know is real.


Gonna go ahead and steal most of this for a chapter in the novel I’m writing thanks

Oct 24, 2021 · Liked by Erik Hoel

In the grand scheme of things, humans aren't special. We are just another species living on a big rock floating in space. There's no metaphysical reason to oppose AI.

That doesn't mean it shouldn't matter to us, for the same reason H. sapiens mattered a lot to H. neanderthalensis. We shouldn't be trying to make ourselves extinct. That's the gist of it. It's trendy to hate on humanity, but I like people. I am one.

AI potentially could replace humanity by being better at everything we do, and everything we could possibly do.

It would not be that hard to destroy human civilization. We've seen that a plague can spread worldwide before governments can react. Imagine an AI actively designing something a lot worse than what we have now.

If I can think of that, I'm pretty sure an AI could.

Sleep well.

Oct 15, 2021 · Liked by Erik Hoel

In my opinion, this talk about human-like AI is a distraction from the real, immediate danger of AI: that it will misinform and mislead consumers of technology by mis-categorizing false information as relevant. The internet brought us information; now it sorts that information for us, and the sorting has a profound impact on us.

It's my belief that we're psychologically very vulnerable, especially as consumers.

Jul 15, 2021 · Liked by Erik Hoel

I really enjoyed this piece. And this may not be the forum for this type of question/observation, so I'll understand if you remove this response.

If we get to the point where we can program a machine to begin to learn ethics, couldn't we program the goal set(s) (or intentions, if you will) to include a non-sectarian set of principles based on, say, the Buddhist Eightfold Path? I mean, not all of them, but the ethical charges for right speech, including all of its subcomponents (don't lie, don't disparage, etc.), and right action (cause no harm to other sentient beings), and so on? Couldn't it, theoretically, be "born" enlightened?

I realize this doesn't solve the other inherent risks, but I'd love to hear someone in the field discuss whether programming morality is even feasible. If the intelligence can't "suffer", can it have as a goal the relief of suffering for others?
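Purely as a thought experiment, here is what I imagine the literal-minded version of "programming the goal set" might look like -- a hard filter over candidate actions, with a couple of precepts encoded as checks. A toy sketch; every name and label in it is invented, and it reflects no real system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    is_deceptive: bool    # would violate right speech (don't lie, don't disparage)
    harms_sentient: bool  # would violate right action (cause no harm)

# Each precept is a name plus a predicate the action must satisfy.
PRECEPTS = [
    ("right speech", lambda a: not a.is_deceptive),
    ("right action", lambda a: not a.harms_sentient),
]

def permitted(action: Action) -> bool:
    """Allow an action only if it violates no precept."""
    return all(check(action) for _, check in PRECEPTS)

print(permitted(Action("answer honestly", False, False)))    # True
print(permitted(Action("flatter with a lie", True, False)))  # False
```

The catch, which my questions above already gesture at, is that real situations don't arrive pre-labeled with `is_deceptive` or `harms_sentient`; producing those judgments reliably is the whole unsolved problem, and it's why "born enlightened" is so much harder than it sounds.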

So many questions. Anyway, thanks. I'll re-read this several times. You made a subscriber out of me.


Interesting argument. A few problems, though. First, this type of human-exceptionalism argument has been used to ban embryonic stem cell research and to sanction the abuse of non-human animals. Second, evolution isn't the only game in town. We can make things better than evolution, and there is no reason to believe that our minds/consciousness will always be better than synthetic counterparts. Third, morality is tuned to harm. Would more harm be brought to synthetic beings in their making than evolution has exacted on trillions of our ancestors? Unlikely.


As someone who grew up during the Cold War, I remember we talked a lot, in very emotional terms, about the existential threat of our technology. Very little was accomplished. What still freaks me out is that all those nukes still exist and are still pointed at cities around the world!

They never stopped being a threat. I guess my point is that there are no lessons to draw from Cold War attempts to pull back on technology. And nukes provide no utility beyond their threat of mutually assured destruction.


"Since we currently lack a scientific theory of consciousness"

"Even the best current AIs, like GPT-3, are not in the Strong category yet, although they may be getting close."

So we don't even know what consciousness is, we don't even know whether what we're currently doing is moving toward or away from that goal, and you think current AI, a lot of which is just fancy statistical regression or gradient descent, needs to be banned?
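For anyone unsure what that jargon means, "gradient descent" really is as plain as it sounds: nudge some numbers downhill against an error measure until the fit stops improving. A minimal, self-contained sketch in Python (toy data and made-up numbers, not any actual production system), fitting a line by gradient descent:

```python
import numpy as np

# Toy data: y = 3x + 1, plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0  # parameters to learn
lr = 0.1         # learning rate (step size)

for _ in range(500):
    y_hat = w * x + b                      # current predictions
    grad_w = 2 * np.mean((y_hat - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(y_hat - y)        # d(MSE)/db
    w -= lr * grad_w                       # step downhill
    b -= lr * grad_b

print(w, b)  # converges to roughly 3.0 and 1.0
```

Scale this loop up to billions of parameters and fancier update rules and you have, roughly, modern deep learning. Impressive, but nothing in it obviously "moves toward" consciousness.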


> Far more important than the process: strong AI is immoral in and of itself. For example, if you have strong AI, what are you going to do with it besides effectively have robotic slaves?

Why exactly is this immoral? They are machines, not humans. Crypto-Christianity on your part, imo.


Wow. I never imagined a moral stance, rather than a utilitarian argument. I definitely agree that, in theory, it is more the kind of thing that society might be able to get behind. I further agree that many doomsayers are themselves really lovers of AI and nerds at heart. In a way, one has to be a techno-optimist to believe that strong AI could be an existential threat, so for those who are not techno-optimists, this is not even on their radar as a thing worthy of consideration.

Still, notice that in the Dune books it took a long, drawn-out war against the AIs for humanity to adopt this stance. They didn't adopt it from the start; the theoretical peril was first made very, very real, and THEN they adopted the rule. I suspect we are the same: we won't adopt such a rule until the actual dangers are known with pretty high confidence.

Even if we could gain acceptance for this rule, I suspect we could not enforce it. The problem is, if we allow work on self-driving cars, GPT-3, etc., then we are strengthening the building blocks for this AI tech. I suspect it will be too easy and too beneficial to cross the threshold once all the parts are in everyone's hands. And even just one crossover in a fertile world with enormous latent cloud-computing infrastructure is one too many.

Still your jihad idea really IS novel!


"Yet there is just as much an argument that AI leads to a utopia of great hospitals, autonomous farming, endless leisure time, and planetary expansion as it does to a dystopia of humans being hunted to death by robots governed by some superintelligence that considers us bugs."

Another question is... what if the former is actually just as dystopian as the latter? Think of a hypothetical Grandpa who no longer possesses the ability to drive and is instead carted around endlessly by his grandkids. Even if they drive him everywhere he wants to go, whenever he wants, I still think Grandpa would be happier driving himself.
