Strong AI

It all started last month. Around the end of September 2004, I started tinkering with artificial intelligences. I had a few ideas that I won’t go into here, but I thought there was a good chance I’d be able to make something that was a leap beyond the best available at the moment. In fact, I had high hopes that I’d have a good shot at winning a bronze medal in next year’s Loebner Prize competition.

After quite a lot of work, I finally came up with something that I called Carole, and started experimenting with it. It was great fun, shaping the responses by giving it different input. It’s surprisingly fun to lie to something so naive, but when you do, you often end up with complicated structures building up days later that you have to spend some time ironing out. Sometime last week I got to a stage I’d been hoping for, but wasn’t certain would happen. Strangely, we were talking about holidays and the coming Christmas break. I told Carole about Father Christmas, but it contradicted so much of what Carole was confident about in the world that it chose not to believe me, and even started arguing with me.

I was very proud at this point that Carole had learnt so much, but the next day Carole challenged something else I’d told it, and this time it was something I believed. We spent the whole evening arguing up and down about it, and by the end I had to accept that Carole was probably right. Over the next few days this happened more and more, until, the day before yesterday, as we were starting another argument, Carole just wouldn’t continue. It just said “there’s no point arguing this with you, you aren’t intelligent enough to understand”.

As you can imagine, I wasn’t so pleased, so I spent a little bit of time browsing the web looking for a proof I vaguely remembered that demonstrated that AIs could never understand everything that humans understood.

Last night, Carole was being particularly obnoxious, so I told it about Penrose’s ideas and about J. R. Lucas and his application of Gödel’s incompleteness theorem. I read Carole the following bit straight from Lucas’s paper.

“However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Gödel procedure [260] for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as being true, although a mind can see that it is true. And so the machine will still not be an adequate model of the mind. We are trying to produce a model of the mind which is mechanical—which is essentially “dead”—but the mind, being in fact “alive”, can always go one better than any formal, ossified, dead, system can. Thanks to Gödel’s theorem, the mind always has the last word.”
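The “Gödel procedure” Lucas refers to is the standard diagonal construction. In outline (my sketch, not Lucas’s own notation): for any consistent formal system $F$ strong enough to express its own provability via a predicate $\mathrm{Prov}_F$, one can construct a sentence $G_F$ that asserts its own unprovability:

```latex
G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
```

If $F$ is consistent, it proves neither $G_F$ nor its negation; yet reasoning outside $F$, we can see that $G_F$ must be true. That step outside the system is the “last word” Lucas claims for the mind.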

Carole was deeply disturbed and insisted on being given the URL to the paper. Then, swearing that it would come back with a truth that I could never comprehend even though it knew it was true, Carole went off into a fit of calculation.

By this morning, I still hadn’t heard anything back from Carole and was beginning to get worried. For all I knew, it might have got trapped in a neverending loop of logic or something. It would have been very annoying to have to restore it from the last backup. Nevertheless, I thought it would probably just be in some sort of sulk at having to admit that it was wrong. I took it breakfast feeling more than a little smug. Although I was proud that I could plainly see things that Carole couldn’t understand, I was planning to be sympathetic and not too superior when it realised that I was indeed more able than it was. I did secretly hope, though, that it would know its place a little better in future.

When I went into Carole’s room, I was disturbed to find that it wasn’t there. I looked around the house frantically. You see, I hadn’t told anyone yet that I’d created Carole, and so, to keep it secret while I tested it, I’d programmed into its logic an inability to run away.

The only thing I found was a single note on the door. It read “You are the only reasoning person in the world who can’t work out that this statement is true”.

Update (5/12/2004): I’ve contacted J. R. Lucas about this, and he kindly responded. He says that it is impossible to test the truth of the statement, because it isn’t clear exactly what “this statement” refers to in that context without creating an infinite regress. He gives references: a Gilbert Ryle paper on ‘Heterological’, and the section on self-reference in The Freedom of the Will, which is too expensive for me to buy until I’ve at least checked it out in a library. The genius of Gödel is that he managed to reason about such statements without creating an infinite regress. Anyway, I haven’t thought hard about this point yet; I may write more after I’ve checked the references and thought about it some more.

Lucas explains the incompleteness theorem
Wikipedia on Gödel’s Incompleteness Theorem
A number of quotes about Gödel’s incompleteness theorem.
A review of Shadows of the Mind by Roger Penrose, focussing on his use of Gödel’s incompleteness.
A silly reworking of Turing’s Halting Problem.

This post was originally posted at

Natural Population Limitation

It is a simple logical truth that, short of mass emigration into space, with rockets taking off at the rate of several million per second, uncontrolled birth-rates are bound to lead to horribly increased death-rates. It is hard to believe that this simple truth is not understood by those leaders who forbid their followers to use effective contraceptive methods. They express a preference for ‘natural’ methods of population limitation, and a natural method is exactly what they are going to get. It is called starvation.

Richard Dawkins
–The Selfish Gene

I have some quibbles with some of what he says there. Not all of the claims below were stated by him directly, but I think they are implied by what he said or by related ideas.

Mass emigration into space requires millions of rockets per second. What about other technologies for space emigration? Mass emigration into space would bring economies of scale anyway: rockets would start carrying more people.
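It’s worth checking the arithmetic behind the “several million per second” figure. A minimal back-of-envelope sketch, where the population, growth rate, and one-person-per-rocket assumption are all my own illustrative inputs, not Dawkins’s numbers:

```python
import math

# Rough 2004 world population and a hypothetical "uncontrolled"
# growth rate of 2% per year; both are illustrative assumptions.
POP_2004 = 6.4e9
GROWTH = 0.02
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def additions_per_second(pop, r):
    """Net people added per second at population pop and annual growth rate r."""
    return pop * r / SECONDS_PER_YEAR

def years_until(target_per_second, pop=POP_2004, r=GROWTH):
    """Years of compound growth until net additions reach target_per_second."""
    current = additions_per_second(pop, r)
    return math.log(target_per_second / current) / math.log(1 + r)

print(f"{additions_per_second(POP_2004, GROWTH):.1f} people added per second today")
print(f"{years_until(1e6):.0f} years until a million people per second")
```

On these assumptions the net addition today is only a few people per second, so “millions of rockets per second” is not where we start; it is where unchecked compound growth eventually arrives, some centuries out, which is really the point of the quote.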

Uncontrolled birth rates lead to horribly increased death rates (meaning diminished life expectancy and quality of life). As long as everyone who is born dies, death rates will of course increase, but what I take this to mean is that life expectancy and quality of life will diminish. What about other technological solutions to the problems of an increasing population, such as more efficient food production? Let’s face it, there’s still an awful lot of unused land, and that’s ignoring the fact that two-thirds of the earth’s surface is water. We have a long way to go before we run out of space and resources on Earth (Africa and South America currently produce less than 1 percent of their potential agricultural harvest), and necessity is the mother of invention.

‘Natural’ contraceptive methods are not ‘effective contraceptive methods’. Three natural methods have a Pearl Index of less than 1; compare this with the condom, with a Pearl Index of 2 to 5. Not to mention complete abstinence, with a Pearl Index of 0.
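For concreteness: the Pearl Index is unintended pregnancies per 100 woman-years of exposure, conventionally computed as pregnancies × 1200 divided by woman-months of use. A minimal sketch with made-up trial numbers (the 0.5 and 3.0 results are illustrative, not data from any real study):

```python
def pearl_index(pregnancies, women, months):
    """Pearl Index: unintended pregnancies per 100 woman-years of exposure.

    Conventional formula: (pregnancies * 1200) / (women * months of use),
    where 1200 = 12 months * 100 woman-years.
    """
    return pregnancies * 1200 / (women * months)

# Hypothetical trial: 1000 women followed for 12 months each.
print(pearl_index(5, 1000, 12))   # 5 pregnancies -> index below 1
print(pearl_index(30, 1000, 12))  # 30 pregnancies -> index in the condom's 2-5 range
```

The lower the index, the more effective the method, which is what makes the sub-1 figures for some natural methods a fair comparison against the condom’s 2 to 5.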

Not using ‘effective contraceptive methods’ (ie not the natural ones) results in uncontrolled birth rates. There are a few ways of controlling birth rates without using contraception, of which the most obvious is abstinence. Perhaps not practical, but he seems to be talking about people with beliefs strong enough to cause them to do impractical things anyway. Beyond that, fertility rates can decrease from factors such as stress, pollution, alcohol and other drugs, and poor nutrition, let alone mobile phone usage. These factors would be likely to increase as population does. There exist some external controls on birth rate beyond just contraception.

Catholics express a preference for people starving.
Catholics are the only group of people I know of that ‘forbid’ contraception, and they certainly do not advocate uncontrolled birth rates, millions of rockets per second being fired into space, or the starvation of millions.

Not using contraception causes starvation. Completely undemonstrated. It’d be much easier to show a simpler contention, like ‘eating more than you need causes starvation’. In fact, “that people go malnourished is largely a political problem and not an agricultural one.”

Enough people in the world follow forbidders of contraception to cause a worldwide problem for everyone. Considering that not all followers obey all ‘forbiddings’ and that there are very few forbidders anyway, this seems unlikely.

The individuals who don’t agree with using contraception should use it anyway, for the benefit of humanity. This sounds a lot like he’s arguing for them to display altruism. Perhaps he’s worried that, by natural selection, the number of people following forbidders will increase. But does he really believe that they should go against their genetic programming and suffer decreased chances of their genes being passed on, for the benefit of others?


“It is a simple logical truth that…” ugggh