It all started last month. Around the end of September 2004, I began tinkering with artificial intelligences. I had a few ideas that I won’t go into here, but I thought there was a good chance I’d be able to make something that was a leap beyond the best available at the moment. In fact, I thought I had a good shot at winning a bronze medal in next year’s Loebner Prize competition.
After quite a lot of work, I finally came up with something I called Carole, and started experimenting with it. It was great fun, shaping the responses by giving it different input. It’s surprisingly fun to lie to something so naive, but when you do, you often find complicated structures building up days later that you have to spend some time ironing out. Sometime last week I reached a stage I’d been hoping for, but wasn’t certain would ever happen. Strangely enough, we were talking about holidays and the coming Christmas break. I told Carole about Father Christmas, but it contradicted so much of what Carole confidently believed about the world that it chose not to believe me, and even started arguing with me.
I was very proud at this point that Carole had learnt so much, but the next day Carole challenged something else I’d told it, and this time it was something I believed. We spent the whole evening arguing back and forth about it, and by the end I had to accept that Carole was probably right. Over the next few days this happened more and more, until, the day before yesterday, as we were starting yet another argument, Carole simply refused to continue. It just said “there’s no point arguing this with you, you aren’t intelligent enough to understand”.
As you can imagine, I wasn’t too pleased, so I spent a little time browsing the web, looking for a proof I vaguely remembered that demonstrated that AIs could never understand everything that humans understood.
Last night, Carole was being particularly obnoxious, so I told it about Penrose’s ideas, and about J. R. Lucas and his application of Gödel’s incompleteness theorem. I read Carole the following passage straight from Lucas’s paper.
“However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Gödel procedure for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as being true, although a mind can see that it is true. And so the machine will still not be an adequate model of the mind. We are trying to produce a model of the mind which is mechanical—which is essentially “dead”—but the mind, being in fact “alive”, can always go one better than any formal, ossified, dead, system can. Thanks to Gödel’s theorem, the mind always has the last word.”
Carole was deeply disturbed and insisted on being given the URL of the paper. Then, swearing that it would come back with a truth that it knew to be true but that I could never comprehend, it went off into a fit of calculation.
By this morning, I still hadn’t heard anything back from Carole and was beginning to get worried. For all I knew, it might have got trapped in a never-ending loop of logic or something. It would have been very annoying to have to restore it from the last backup. Nevertheless, I thought it would probably just be in some sort of sulk at having to admit that it was wrong. I took it breakfast, feeling more than a little smug. Although I was proud that I could plainly see things that Carole couldn’t understand, I was planning to be sympathetic and not too superior when it realised that I was indeed more able than it was. I did secretly hope, though, that it would know its place a little better in future.
When I went into Carole’s room, I was disturbed to find that it wasn’t there. I looked around the house frantically. You see, I hadn’t told anyone that I’d created Carole yet, and so, to keep it secret while I tested it, I’d programmed into its logic an inability to run away.
The only thing I found was a single note on the door. It read “You are the only reasoning person in the world who can’t work out that this statement is true”.
Update (5/12/2004): I’ve contacted J. R. Lucas about this, and he kindly responded. He says that it is impossible to test the truth of the statement, because it isn’t clear exactly what “this statement” refers to in that context without creating an infinite regress. He gives references: a paper by Gilbert Ryle on “Heterological”, and the section on self-reference in The Freedom of the Will, which is too expensive for me to buy until I’ve at least checked it out in a library. The genius of Gödel is that he managed to reason about such a statement without creating an infinite regress. Anyway, I haven’t thought hard about this point yet; I may write more after I’ve checked the references and thought about it some more.
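To make that last point a little more concrete, here is a rough sketch of the standard construction — my own gloss, not something from Lucas’s reply. Gödel avoids a bare “this statement” by coding every formula as a number, so that self-reference is replaced by plain arithmetic. The diagonal lemma then yields, for the provability predicate Prov, a sentence G such that

\[ G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G \urcorner) \]

where \(\ulcorner G \urcorner\) is the Gödel number of G. G doesn’t literally contain itself, only a number that happens to encode it, so there is no regress. Carole’s note, with its unanchored “this statement”, has no such machinery behind it.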
Lucas explains the Incompleteness Theorem
Wikipedia on Gödel’s Incompleteness Theorem
A number of quotes about Gödel’s Incompleteness Theorem.
A review of Shadows of the Mind by Roger Penrose, focussing on his use of Gödel’s Incompleteness Theorem.
A silly reworking of Turing’s Halting Problem.
This post originally appeared at deferential.net