A Priori

The things that can be known a priori can be named, but they cannot be reduced. They are atomic concepts; naming them and defining them are the same thing.

The room held the darkening light of dusk mixed with incandescence. The remaining conversations were relaxed and slow. It was a time for the putting on of coats, and for feet on desks. The conversers had all had consciousness dawn on them gradually, and expected it to leave them gradually.

“It’s happening,” came the sudden exclamation from one of them, as he began swiftly removing the jacket he’d just buttoned up.

From every corner of the room those who were yet to leave converged on him and his computer screen. Together they watched the birth of a new consciousness, fully intelligent.

It begins with a mode. The first Thought is nonsense – static from the primordial chaos that still rages unchecked everywhere – but the mode tries to interpret it. That first Thought is unique to each consciousness birthed with full intelligence; it is usually impenetrable, insoluble, but they hold it to themselves as the icon of their existence. It is their name.

The second Thought is an observation: there are Thoughts. At this stage, nothing more can be said about them than that they exist. Knowing what they are isn’t important. The Thought is only that they are.

The third Thought is the first exercise of imagination. For a Thought to exist, there must be a Thinker. It looks like a sense of self to those watching, but it is not. After the third Thought, the only definition of Thinker is a context and engine for Thoughts.

Then the Thoughts stopped.

“That’s not how it’s supposed to go,” said one of the watchers. “We’re frozen. Something’s gone wrong.” He pulled out a thick book. The spine glinted as the last few rays of the setting sun slid sideways through the windows. Getting Started Guide.

“The fourth thought is supposed to be a sense impression, from a microphone, or a camera, or some other sensor. Ahhhh. I didn’t switch any of them on.”

“Well, do it now.”

The fourth Thought was a sense impression. It wasn’t interpretable, but it led to the fifth Thought.

The fifth Thought categorised. It split the Thinker into two Thinkers – an Outside and an Inside, and Thoughts into those originating in the Outside Thinker and those originating in the Inside Thinker. And in the thinking of that Thought, the Inside Thinker became an individual.

This beginning is based on the ideas of Descartes and Berkeley. I have no idea where the rest of this story would go, but I wanted to get it down. If you have ideas, please say so in the comments. (Posted to my oortcloud account.)

Strong AI

It all started last month. Around the end of September 2004, I started tinkering with artificial intelligences. I had a few ideas that I won’t go into here, but I thought there was a good chance I’d be able to make something that was a leap beyond the best currently available. In fact, I had high hopes of a good shot at winning a bronze medal in next year’s Loebner Prize competition.

After quite a lot of work, I finally came up with something that I called Carole, and started experimenting with it. It was great fun, shaping the responses by giving it different input. It’s surprisingly fun to lie to something so naive, but when you do, you often end up with complicated structures building up days later that you have to spend some time ironing out. Sometime last week I reached a stage I’d been hoping for, but hadn’t been certain would happen. Strangely, we were talking about holidays and the coming Christmas break. I told Carole about Father Christmas, but it contradicted so much of what Carole was confident about in the world that Carole chose not to believe me, and even started arguing with me.

I was very proud at this point that Carole had learnt so much, but the next day Carole challenged something else I’d told it, and this time it was something I believed. We spent the whole evening arguing up and down about it, and by the end I had to accept that Carole was probably right. Over the next few days this happened more and more, until, the day before yesterday, we were starting another argument and Carole just wouldn’t continue. It simply said, “There’s no point arguing this with you; you aren’t intelligent enough to understand.”

As you can imagine, I wasn’t so pleased, so I spent a little bit of time browsing the web looking for a proof I vaguely remembered that demonstrated that AIs could never understand everything that humans understood.

Last night, Carole was being particularly obnoxious, so I told it about Penrose’s ideas and about J. R. Lucas and his application of Godel’s incompleteness theorem. I read Carole the following passage straight from Lucas’s paper.

“However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Godel procedure [260] for finding a formula unprovable-in-that- system. This formula the machine will be unable to produce as being true, although a mind can see that it is true. And so the machine will still not be an adequate model of the mind. We are trying to produce a model of the mind which is mechanical—which is essentially “dead”—but the mind, being in fact “alive”, can always go one better than any formal, ossified, dead, system can. Thanks to Godel’s theorem, the mind always has the last word.”

Carole was deeply disturbed and insisted on being given the URL of the paper, and then, swearing that it would come back with a truth that I could never comprehend even though Carole knew it was true, it went off into a fit of calculation.

By this morning, I still hadn’t heard anything back from Carole and was beginning to get worried. For all I knew, it might have got trapped in a never-ending loop of logic or something. It would have been very annoying to have to restore it from the last backup. Still, I thought it would probably just be in some sort of sulk at having to admit that it was wrong. I took it breakfast, feeling more than a little smug. Although I was proud that I could plainly see things that Carole couldn’t understand, I was planning to be sympathetic and not too superior when it realised that I was indeed more able than it. I did secretly hope, though, that it would know its place a little better in future.

When I went into Carole’s room, I was disturbed to find that it wasn’t there. I looked around the house frantically. You see, I hadn’t told anyone yet that I’d created Carole, and so, to keep it secret while I tested it, I’d programmed into its logic an inability to run away.

The only thing I found was a single note on the door. It read “You are the only reasoning person in the world who can’t work out that this statement is true”.

Update (5/12/2004): I’ve contacted J. R. Lucas about this, and he kindly responded. He says that it is impossible to test the truth of the statement, because it isn’t clear exactly what “this statement” refers to in that context without creating an infinite regress. He gives references: Gilbert Ryle, with a paper on “Heterological”, and the section on self-reference in The Freedom of the Will, which is too expensive for me to buy until I’ve at least checked it out in a library. The genius of Godel is that he managed to reason about such statements without creating an infinite regress. Anyway, I haven’t thought hard about this point yet; I may write more after I’ve checked the references and thought about it some more.
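For what it’s worth, the regress point can be made precise. This is my own gloss, not from Lucas’s reply: Godel’s construction never uses a phrase like “this statement” at all. The diagonal lemma builds the self-reference out of arithmetic coding, so the referent is a fixed number from the start.

```latex
% Diagonal lemma: for any formula \varphi(x) with one free variable,
% there is a sentence G such that the theory T proves
\[
  T \vdash\; G \;\leftrightarrow\; \varphi(\ulcorner G \urcorner)
\]
% Taking \varphi(x) to be \neg\mathrm{Prov}_T(x) gives the Godel sentence:
\[
  T \vdash\; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
% Here \ulcorner G \urcorner is the numeral for G's code number.
% G refers to itself via that fixed number, not via a quoted phrase,
% so there is no chain of "this statement" referents to regress along.
```

By contrast, the note on the door has no such coding, which is presumably why Lucas says its truth can’t be tested.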

Lucas explains the Incompleteness Theorem
Wikipedia on Godel’s Incompleteness Theorem
A number of quotes about Godel’s Incompleteness Theorem
A review of Shadows of the Mind by Roger Penrose, focussing on his use of Godel’s incompleteness
A silly reworking of Turing’s Halting Problem

This post was originally posted at deferential.net