32 Comments
Feb 16, 2022 · Liked by Erik Hoel

I really like this; it reminds me of the short film Sunshine Bob, which beautifully portrays human helplessness in the face of an increasingly unlistening world.

https://www.youtube.com/watch?v=645OxM9MePA

Great post––the connection between black box AI and kamis is an interesting insight.

Have you read "The Restless Clock" by Jessica Riskin? In it, she argues that pre-Reformation, people in medieval Europe tended to imbue mechanical automata with a kind of vital spirit as well. I haven't verified the extent to which this is true but I thought it was an interesting connection nonetheless.

It also reminded me of something I've been thinking about with respect to "explainability" and AI. I work with neural language models a fair amount for my own research (like BERT or GPT-3), and these are in some sense prototypical black boxes. Even though, in theory, we (or at least someone, somewhere) know the precise matrix of weights specifying the transformations of the input across each layer, this feels somehow unsatisfactory as an explanation––perhaps because it doesn't really allow us to make generalizations about *informational properties* of the input? And so there's an odd sense in which practitioners have come full circle, and we are now using the same battery of psycholinguistic tests––designed to probe the original black box of the human mind––to probe these models. See, for example:

Futrell, R., Wilcox, E., Morita, T., Qian, P., Ballesteros, M., & Levy, R. (2019). Neural language models as psycholinguistic subjects: Representations of syntactic state. arXiv preprint arXiv:1903.03260. (https://arxiv.org/abs/1903.03260)
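
If it helps to make that concrete, here is a rough sketch (my own toy example, not from the paper or the post) of the standard surprisal probe: score each word of a sentence by how surprised a pretrained model is to see it, assuming the Hugging Face `transformers` library and the public `gpt2` checkpoint.

```python
# Toy psycholinguistic probe: per-token surprisal under a pretrained LM.
# Assumes: pip install torch transformers  (and the public "gpt2" checkpoint).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence: str):
    """Return (token, surprisal in bits) pairs, skipping the first token."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                      # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits, dim=-1)
    # Probability of each token given its left context (shift targets by one).
    target_lp = log_probs[0, :-1].gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    surprisal_bits = (-target_lp / torch.log(torch.tensor(2.0))).tolist()
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:])
    return list(zip(tokens, surprisal_bits))

# Garden-path contrast: a surprisal spike at the disambiguating word is the same
# signature psycholinguists look for in human reading times.
for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok:>10s}  {s:6.2f} bits")
```

The interesting part is that we interpret the model by its behavior on carefully constructed stimuli rather than by staring at the weight matrices, which is exactly the move we make with human subjects.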

>During the industrial age, only other humans had minds, never machines.

Of course, animals (including Minerva) have minds, the workings of which may not correspond to how our own, human minds work. Perhaps animal minds are the original AI? There is a long tradition of anthropomorphizing the thought processes of animals.

>My credit card now has a kami.

Maybe, in a century or so, we will worship specific AIs.

Indeed, this went to my Promotions tab in Gmail even though I've moved you into my Primary tab before. Gmail doesn't like Substack. Every Substack newsletter I get goes to the wrong tab. It makes me not want to use the platform myself... But great article! Love the kami comparison.

Feb 27, 2022 · Liked by Erik Hoel

Erik, I truly loved reading this piece -- it hits an intellectual sweet spot -- an intersection of science, mysticism, culture and maybe philosophy -- where to begin?

First, I've had quite a few such experiences -- AI sending my earnest emails into other people's spam folders -- and also human ones, in which a programmer makes "Cretan" errors and the rest of the organization effectively assumes there is no appeal to the man behind the curtain. I once had a senior help-desk technician helping me troubleshoot a problem with his company's anti-virus program; he ran out of failure modes and said "a virus got in it." That is some sort of meta-Cretan error.

Second, your piece points to the question of an AI that monitors other AIs for impenetrable, inarticulable errors. Might something like that take months of real-world training to catch and repair such errors? And of course it would add its own regressive false results -- X% of the original AI's errors get solved, yet another X% of AI-monitor errors get introduced. Maybe if there were a way to document the original AI's decisions, the AI monitor could read that documentation, and so perhaps learn how to repair the decision(s) that caused the malfunction(s)?
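
To put rough numbers on that worry, here is a toy back-of-the-envelope calculation (the figures are mine and purely illustrative, not from the post):

```python
# Toy composition of error rates: a monitor that repairs some of the base AI's
# errors but also mangles some decisions that were originally correct.
base_error_rate = 0.05      # assumed: base AI is wrong on 5% of decisions
monitor_catch_rate = 0.80   # assumed: monitor repairs 80% of those errors
monitor_false_alarm = 0.02  # assumed: monitor wrongly "fixes" 2% of correct ones

residual_errors = base_error_rate * (1 - monitor_catch_rate)      # 0.010
introduced_errors = (1 - base_error_rate) * monitor_false_alarm   # 0.019
combined = residual_errors + introduced_errors

print(f"error rate without monitor: {base_error_rate:.3f}")   # 0.050
print(f"error rate with monitor:    {combined:.3f}")          # 0.029
# With these made-up numbers the monitor helps, but push its false-alarm rate
# past roughly 4.2% and the cure becomes worse than the disease.
```

Whether the monitor is a net gain hinges entirely on whether its own false alarms stay below that break-even point, which is exactly the kind of thing months of real-world training would have to establish.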

I completely agree with and emotionally support your saying "AI makes animists of us all," but the logical positivist in me is not ready to concede the point. On the one hand, when you say logical positivism denies "meaning to all statements that can’t be tested," my reply is “absolutely, they have no meaning if they can’t be tested!” On the other hand, my reply is “what do you mean by ‘test’?” (This is what “truth units” is about; I’ll have something on Medium about it sooner rather than later.)

Again, it was truly exciting to read this piece – so well written and so intellectually engaging – and it maps so well one of the truly great frontiers of human culture we face now.

Feb 18, 2022 · Liked by Erik Hoel

Oh wow, this is awesome! I came across your newsletter via The Sample a few days ago (and subscribed with a different email ID), but this one stuck out because it suddenly turns the whole story into a metaphor for something else!

The style reminds me of what we try to approach at my publication, Snipette, and it's really cool how you can suddenly look at the world in a whole new way. Actually, at the first section I was like "okay, typical credit card complaint/fraud story", but then I remembered the title, and then I read on down, and then I was just "whoa 😮" all the way through!

Feb 18, 2022 · Liked by Erik Hoel

Erik, this is such an odd attack on AI. Basically, you are disappointed that the decision surfaces behind these decisions are not simple. Would you want college admissions to be executed on a simple decision surface? Whom you should marry? Which joke is funny?

The thing that is new is that the complex, inarticulate surface does not originate within a biological entity. I agree that is new. But then you go on to rail against the injustice of inarticulate surfaces that affect your life.

But really, would you want the outcome of your next date to be determined by a simple surface?

Yet those outcomes have HUGE impact.

We have always lived in a world with opaque decisions affecting our most prized things. We just called it... life...

The metaphor of the kami is a good one, but here's another. Before algorithms, decisions like those were made by people. Usually we can understand people to a large extent because we are similar to them, but sometimes not: consider, for instance, moving to a country whose culture and language you understand poorly. You want to buy a property, so you go to a bank, and the banker tells you in very laborious English that your application was denied. He's unable to explain why due to the language barrier, and you can't guess because you don't know the culture.

And of course, cultural distances can also occur within a single country, for example due to social class.

Algorithms, first applied to people (decision trees, etc.), and then to machines, have bridged cultural distances and made many processes legible, but perhaps it was indeed just a matter of time before those cultural distances became gulfs again.

Feb 16, 2022 · Liked by Erik Hoel

I really liked it, and yes: my newsletter keeps going to Promotions...

Feb 16, 2022 · Liked by Erik Hoel

I am not a technician, nor do I have any up-to-date training in information technology; my last encounter with real circuitry occurred back in 1944, when I trained in the US Army Air Force as a radar technician. But it seems obvious to me that almost all the essential controls of our current civilization have been ceded to the digital complex, which is easily invaded by the growing decision-making AI complex. That could create an SF monster, a Frankenstein that easily relegates all of humanity to the fictional, meta-game-playing jungles of infinite fantasy while the real world is deftly handled by a rapidly evolving intellect, ever more powerful and impenetrably complex, in which humanity, if it does not ultimately destroy everything, will become amusing pussycats of no real power or importance.

Excellent explanation of why my Chase Amazon card declines a purchase of $1.22 and then accepts one of $14.95.

Thanks.

I suppose I fail to see the difference between the new world you describe and the old world. It used to be that the ancients observed patterns in nature and ascribed them to spirits, or Laws, or The Gods -- is any of this different from thinking there's a "black box" and trying to form a hypostatic abstraction that reconciles various observations? We think we know other people, but their real internals are beyond us (even conceptually, given quantum free will and all that, and certainly practically, given the complexity of the brain, not to mention reflexivity and the like) -- isn't there a black box there, too? (Skinner and his disciples certainly got a lot of leverage from the idea.)

We're always doing model induction! AI is a new, complicated, prevalent thing, but there are plenty of other examples all around. The new kami are just the Gods of the Copybook Headings.

It's a very evocative metaphor.

I can't help but think that, if a kami lives inside every IoT device, sooner or later some of us will go the route of séances and ritual possessions; think of it as Mega Man Battle Network technology blended with spiritual forces akin to SMT or Persona 5. (Assuming "There Is No AI Risk" is real, that is.) Memetic fusion with the body would possibly become the new vehicle for personal transformation.
