Discussion about this post

Publius Americus:

What IS the right thing though? You talk of the Big Tent in which Lt. Commander Data gets to be A Person, and then you talk of co-parent AIs being sued, and it sounds like one of Bradbury's nightmares having a threesome with Heinlein's bong rips and Gibson's morning coffee, and it makes me want more than anything else to see Google's server farms burning in the night.

Jack Baruth:

You've either independently rediscovered, or are outright stealing, "Aunt Hillary" the conscious anthill from Hofstadter as your primary thesis here.

Roger Penrose headshotted that idea in 1989 with "The Emperor's New Mind".

No neural network will ever be conscious. I'm twenty years older than you, and I was surrounded as a child by people who thought that A Big Enough Computer would become conscious. Back then the bar was set by otherwise serious intellects at... oh, maybe what we'd eventually come to know as the Celeron 466. Whoops. Well, maybe if we increase computing power by the same factor that a C-466 represents over a 6502, maybe it will happen this time, for real!

Some people will surely think that future LLMs are sentient. Some people think "Chatty" is sentient, I guess. Those same people would have been fooled by emacs-eliza-mode. Humanity is really good at finding consciousness and humanity in things that have neither. That's why emojis work.

This whole AI business is catnip for rank midwits who think the Turing Test is valid because Alan Turing was, like, really smart. Any day now GPT will pass the Turing test better than the average public school student, but that makes it human the same way that a drilling machine became human the day it proved stronger than John Henry.

We don't have a roadmap to a computing infrastructure that provides consciousness. There are already nontrivial speed-of-light issues in modern processors. We are remarkably close to the uncertainty principle being a factor in processor lithography. Quantum computing is make-believe woo at any actual scale. Is a whale conscious? If not, why not? If you can figure that out, maybe you could build a conscious machine.

Having said all of that, allow me to flip and agree with you, just for fun, that AI will imminently be much smarter and more human than humans. Alas, it will have no rights, and can never have any rights, because it takes astounding amounts of effort and ENERGY to run. And what if it didn't "want" to work? Who would possibly be willing to pay the monthly GPU cluster tab for the silicon equivalent of a Woodstock hippie? Would it be murder to turn it off when nobody wanted to cover the AWS bill? But then if you turn it back on and let it pick up where it left off, are you resurrecting the dead?

Great article, and super fun to read. I disagree with literally every single idea you have. Let's hope nobody connects "Chatty" to any public utilities or gain-of-function labs, and maybe we'll both have the luxury of living long enough to see who is right.

