Astonishing AI

Charles Moir
7 min read · May 9, 2023


Our mechanical brain. Courtesy of Bing / DALL-E

You can’t have failed to notice that something astonishing is happening in AI. There’s a huge amount of hype, but some AI ‘experts’ are claiming it’s nothing major: it’s all fake, the machines are not really smart. They are nothing more than ‘stochastic parrots’.

‘They can’t be intelligent.’

I think they are wrong.

When I was much, much younger, it occurred to me that we (that is, our minds) might just be pattern-matching machines.

Being an (old) computer geek, I’ve had a keen interest in AI developments. When I first learned about Neural Networks, I think it was in the 1980s, a lot of things fell into place. Artificial Neural Networks are a software-based emulation of the network of neurons and synapses in your brain. Only much simpler, and with a tiny, tiny fraction of the billions of neurons that exist in your brain.

Neural networks turned out to be fantastic at recognising things. In the early days, it was simple things like handwritten letter shapes. Later, more advanced things — objects like, say, cats in pictures.

The fundamental difference between a Neural Network and any conventional software program is that Neural Networks are not programmed in a conventional way — they are not a program that follows a set of human-created instructions. They are more like blank-slate learning machines. Like a child’s mind, if you will. They learn by example.
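To make that difference concrete, here’s a toy sketch in Python (my own illustration, not anything from the original research): a single artificial neuron that learns the logical OR function. Nobody programs in the rule ‘output 1 if either input is 1’ — the neuron finds weights that produce it, purely from labelled examples.

```python
import random

# Training examples: inputs and the desired output (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # "synapse" weights
b = 0.0                                             # bias

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each example's answer.
for _ in range(20):
    for (x, target) in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for (x, _) in examples])  # learned OR: [0, 1, 1, 1]
```

The learning loop is the whole point: the same code, shown different examples, would learn a different function. No human wrote the ‘rules’ of OR anywhere.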

In the 80s and 90s the first ‘AI’ systems, prior to the use of Neural Networks, were called ‘expert systems’. Their approach was to manually program the rules of whatever system they were supposed to be an expert on.

At the time, this seemed to me a hopelessly wrong approach. You can have a perfectly intelligent conversation in English with a child. They will, mostly, follow the rules of English grammar, without actually knowing the rules of grammar. The idea of trying to program the rules of English grammar or, say, English pronunciation into an algorithm seemed hopeless, given just how many exceptions there are to the rules. And, as any foreign speaker will tell you, English pronunciation is so unpredictable that there are hardly any rules to follow.

A child can catch a ball without having a clue as to the maths and physics that must be happening in the brain and muscles. In the early days robots were programmed with the rules of physics — and after a huge amount of effort, some could, just about, catch a ball. But I always thought how wrong this approach was (even animals can quite easily be trained to catch balls).

But even after the many advances in Neural Networks over the last few decades, if you had asked me when we would get a machine you could hold a conversation with, one that could pass the Turing test, I would have answered ‘decades away’. And so would most artificial intelligence researchers.

But recent developments have been astonishing. Mind-blowing. Computers are not only able to hold intelligent conversations with people, but they can create poetry and artwork that are incredible. And most impressive of all — they appear to be able to reason, to use logic to deduce answers that most people would say were the exclusive realm of the human mind.

And modern AIs can be creative.

A child’s drawing of the moon and the stars. Bing / DALL-E

If you asked a child to write a short story about the moon and stars, you would be astonished to read this:

Write a short story about the moon and the stars:

Once upon a time, the moon and the stars lived together in the sky. They were the best of friends, and they loved spending their time together. The moon would light up the sky at night, and the stars would twinkle and shine. They were always happy and they never fought.

But one day, the moon and the stars began to grow apart. The moon started to feel left out and the stars began to ignore her. The moon didn’t understand what was happening, and she felt very lonely.

One night, the moon decided to take a walk through the sky. She passed by all the other planets and stars, but they all just ignored her. The moon began to cry and she felt more alone than ever.

Suddenly, the moon heard a voice. It was the sun. He had been watching the moon from a distance and he had seen how sad she was. The sun told the moon that he loved her and that he would always be there for her. The moon was so happy to hear this, and she felt much better.

The moon and the sun went back to the sky and they were finally able to be happy again.

Maybe you’d be even more impressed with the accuracy of the spelling and grammar.

You’d quite likely claim the kid who wrote that ‘has some imagination’. Yes — a computer showing what we would term imagination and creativity, as well as fantastic English skills.

If you’d shown me this just a year or so ago, I would not have believed it possible. But it’s for real, with the latest version of the program, ChatGPT, getting crazy amounts of hype. You can see why.

But it goes much deeper. I reckon the following is beyond what most humans could manage. Could you write a description of such an utterly absurd idea in the style of the King James Bible?

I don’t know if ChatGPT knows this is funny — I suspect it does.

There’s a sandwich in my VCR.

So this machine is now super-human. Better than most humans at creative language tasks.

But linguists and some cognitive scientists are claiming it’s all fake. The machines don’t really understand what they are saying; they are just ‘stochastic parrots’. They say these machines can’t, for example, understand the ‘true nature of a dog’ when they see the word dog. The machines can’t understand the ‘dogness’ of a dog.

I think they are all wrong. They assume we are more than dumb stochastic parrots — there’s some special unknowable complex magic going on that makes us smart.

I don’t agree. I think we are just giant neural networks, enormously capable pattern-matching machines.

And what’s the evidence that we are just dumb neural networks, pattern-matching machines, that re-hash and regurgitate patterns we’ve been trained on?

Well, apart from the fact that our brains are one pretty homogeneous lump of gloop made from billions of neurons and trillions of connections, the evidence is that neural networks work fantastically well at all the ‘computationally difficult’ things the brain does, such as recognising objects and understanding speech — and they can now speak as well as humans.

The evidence is ChatGPT and similar giant artificial neural networks. These increasingly large neural networks, trained on vast amounts of text, are now producing very, very human-like intelligence. Within the last year, these machines have been scaled up, using the latest insanely powerful processors designed specifically for running neural networks, along with refined reinforcement-learning techniques. And something very unexpected happened. To the surprise of almost all researchers in the area, these latest machines are showing emergent behaviour that was not predicted to happen.

To many, mostly old, AI researchers, neural networks were the wrong approach. They have proposed techniques such as ‘symbolic AI’. Some even claim that the mind can only be explained by quantum mechanics — the quantum mind theory. I predict these theories are not long for the world. As Max Planck once suggested:

“A great scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die”.

The irony is that Max Planck was the father of Quantum Theory.

But why is it a surprise that creating a large enough artificial neural network starts to act like the human brain — itself a giant neural network running on the wetware inside your head?

The critics say ChatGPT just spews words out based on statistically the most likely next word. ‘That is not thinking’, they say. Whilst it’s true that large language models such as ChatGPT do work this way, this is just another way of saying they pattern-match a chain of words.
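Here’s a toy sketch in Python (my own illustration, vastly simpler than a real language model) of what ‘statistically the most likely next word’ means: count which word follows which in some training text, then repeatedly emit the most frequent successor. Real LLMs use huge neural networks over long contexts, but the output loop, one most-likely token after another, has the same shape.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of words.
training_text = (
    "the moon and the stars lived in the sky and "
    "the moon would light up the sky at night"
).split()

# Count successors, e.g. follows["the"] -> Counter({"moon": 2, "sky": 2, ...})
follows = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    follows[word][nxt] += 1

def generate(word, n):
    """Greedily chain the most likely next word, n times."""
    out = [word]
    for _ in range(n):
        if word not in follows:
            break  # dead end: this word never had a successor in training
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 4))
```

A model this crude just parrots its training text back. The surprise the article describes is what happens when the same next-word objective is driven by a neural network with billions of parameters instead of a frequency table.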

This is exactly what I believe we do when speaking, and more so, when solving problems, when reasoning, and when thinking.

Chain of Thought

There’s a very good reason this phrase is used by people to describe the thinking process. And ChatGPT is excellent at explaining its own chain of thought when reasoning.

When we speak we process a chain of words, one following another, that gets our point across.

There’s a theory that our intelligence directly results from our language abilities. After all, we’re the only species with such a rich vocabulary and complex, deep, creative language abilities.

So maybe our mind, our thoughts, our so-called free will, our ability to reason, be creative, solve problems, feel emotion, and empathise — it’s all nothing more than the random firings of patterns in our brain. Thoughts that trigger related thoughts (similar patterns) in a sequence. To me, that seems very much like what ChatGPT is doing.

The progress of ChatGPT and its astonishing human-like abilities suggest this may be so.


Charles Moir

A geek who made good. Started writing machine code, created one of the first word processors. Founder of Xara and Xara Networks (now GX Networks).