# The Mind and I

Last week I started reading *Shadows of the Mind* by Roger Penrose. It asks whether or not AI will ever be able to be “aware” or “conscious”, and is a fun look at AI, maths, physics and philosophy. It has helped shape my thinking about a few issues I had with the human mind. So, here’s a short summary of what Penrose has said so far.

There are 4 options when it comes to the human mind and the ability to recreate it in a computer:

- The mind and awareness are the result of physical processes and these can be modelled, with awareness arising in these models;
- The mind and awareness are the result of physical processes that can be modelled, but these models will not be aware;
- The mind and awareness are the result of physical processes that cannot be modelled; or
- The mind and awareness are not the result of physical processes and cannot be modelled.

Options 1 and 4 are the extremes: pure determinism versus an option that probably requires some “divine” intervention in order for us to be conscious beings. Penrose is quick to dismiss these options, and his own position is option 3. *Shadows of the Mind* argues that there is something fundamentally missing from science that is needed in order for AI to fully recreate the human concept of consciousness. As a first step in this argument, Penrose gives an interesting variant of Gödel’s incompleteness theorem (Carl, see below for my attempt at explaining this more clearly than I did the other day). His variant concludes with the statement:

> Humans do not use a knowably sound algorithm to derive mathematical truths.

By *sound* we mean that the algorithm in question is always right: if it returns an answer, we know that answer is correct. According to the statement above, if the algorithm we use to derive mathematical truths is sound, we will never be able to know this for certain. Where Penrose will go from here I don’t know, but it’s an interesting standalone statement.

One interesting section of the book has shown me something I’m sure I must have seen before but had completely forgotten, and it goes some way towards alleviating an issue I’d been having recently. Being of a naturalistic persuasion, I find it hard to argue that the mind is made up of anything that isn’t explainable in terms of some collection of physical processes, which in turn seems to lead inexorably to the view that the mind is controlled by deterministic processes (however ridiculously complicated these processes must be!). What was causing me trouble was the notion that, if our minds are deterministic, it could be possible to know what I would do/say/think in a given situation, taking away the notion of free will (which I’m pretty sure I remember reading recently is regarded as an illusion by some philosophers).

Penrose gives examples of deterministic systems that are not computable. For instance, consider a system defined by a sequence of tiles. These tiles form a sequence S_i, where S_i is a tile made up of i squares touching edge to edge in some arrangement (it’s easy to think of some rule for determining what exact arrangement these squares take in each S_i). We now define the rules governing the evolution of the system:

- If the tile corresponding to state S_n can tile the infinite plane without overlaps or gaps, move to state S_{n+1};
- If the tile corresponding to state S_n cannot tile the plane as above, move to state S_{n+2}.

It is a theorem that there does not exist an algorithm that will tell us whether or not an arbitrary tile as defined above will tile the infinite plane without overlaps or gaps. The system above is therefore deterministic (the tile either can or can’t tile the infinite plane, so there is only one way the system can evolve) but uncomputable (we have no way in general of knowing what the system will do).
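As a sketch (mine, not Penrose’s), the evolution rule itself is trivial to write down; what cannot be written down is the predicate it depends on. Here `tiles_plane` is a hypothetical oracle: by the theorem above, no general implementation of it exists, so for illustration we pass in a stand-in.

```python
from typing import Callable

def evolve(n: int, tiles_plane: Callable[[int], bool]) -> int:
    """One step of the tile system: state S_n moves to S_{n+1}
    if tile n tiles the infinite plane, otherwise to S_{n+2}."""
    return n + 1 if tiles_plane(n) else n + 2

# All the uncomputability lives in the oracle, not in the rule.
# A stand-in predicate lets us "run" the system anyway:
state = 1
for _ in range(4):
    # pretend (falsely, just for illustration) that even-numbered tiles tile the plane
    state = evolve(state, lambda n: n % 2 == 0)
```

The point is that the determinism of the rule buys us nothing: predicting the trajectory of states requires answering the tiling question, which no algorithm can do for all tiles.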

This is pretty comforting to me – I may have a deterministic mind, but at least I may also still have the illusion of free will taking me into an unknown future state.

---

[Brief description of my understanding of Gödel’s Incompleteness Theorem… If I’m wrong, someone correct me]

A *formal system* is a language together with a system of rules for deriving statements in that language from one or more premises that we take to be true, be they axioms or other statements already derived within the formal system. Given a statement in a formal system, it may fall into one of several sets:

- the set TRUE, made up of statements P derivable using the rules of the formal system;
- the set FALSE, made up of statements Q for which the statement “not Q” is in the set TRUE;
- the set UNDECIDABLE, made up of statements which are in neither the TRUE nor the FALSE sets.

A formal system in which there are no statements in the set UNDECIDABLE is called *complete*. A formal system for which there are no statements in both the set TRUE and the set FALSE is called *consistent*.
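To make these definitions concrete, here’s a toy example of my own (it has nothing to do with Gödel’s actual construction): a formal system whose statements are strings of a’s and b’s, with the single axiom “a” and the single rule “from any theorem X, derive Xb”. Its TRUE set can be enumerated up to a given derivation depth:

```python
def theorems(depth: int) -> set:
    """All statements derivable in at most `depth` rule applications,
    starting from the single axiom "a"."""
    derived = {"a"}                            # the axiom
    for _ in range(depth):
        derived |= {s + "b" for s in derived}  # rule: from X, derive Xb
    return derived

# "abb" is derivable (it is in TRUE); "ba" is never derivable at any depth.
```

This toy is so simple that membership is obviously decidable; Gödel’s point, of course, is that for systems rich enough to express arithmetic it is not.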

Gödel proved that no formal system powerful enough to express basic arithmetic can be both complete and consistent – that is, any such system either contains a statement that is both true and false, or a statement that is neither. Either way, this was a pretty big blow to the foundations of mathematics!

Ahhh, Penrose’s take on consciousness. I remember when I started reading this book around 4 or 5 years ago (I haven’t finished it) and I went to Gerald Sussman (a protégé of Marvin Minsky, co-founder of MIT’s AI Lab) for a chat. From what I remember he said something along the lines of “Roger is a brilliant man who knows his math[s] and physics, but his understanding of the mind is not to be trusted.”

In particular, Penrose relies on the quantum effects going on in microtubules in nerve cells to create the un-model-able mind he’s after. First of all, whether quantum effects are really necessary to understand the emergent complexity of our neural systems is questionable. Secondly, even if quantum effects were necessary, then a quantum computer would be capable of modeling them. Finally (and I forget to what extent Penrose addresses this), we have to consider what exactly we expect from having our computers simulate consciousness. An essential ingredient is a vast amount of data from the outside world. Consciousness is most likely a simple pattern-detection algorithm, iteratively applied to super-vast data sets (think terabytes per minute).

I think I understand why I have stopped worrying about determinism and its interaction with free will.

Physical systems are robust against themselves. Ignoring quantum effects, all physical processes can be described by the deterministic time-evolution of some Lagrangian. Even for the fluid motion inside a single nerve cell this would amount to describing the solutions to some 10^20 coupled differential equations, which, by the way, are chaotic, so even the tiniest infinitesimal perturbation would produce exponentially divergent behaviors. The idea that thought relies on these deterministic processes seems absurd. More likely, an extremely condensed and accurate stochastic/probabilistic approximation is all that matters for describing the behavior of the system. Once probability plays a strong role in thought it should not be surprising that “Humans do not use a knowably sound algorithm to derive mathematical truths.” No Incompleteness necessary.

Yes… One of the problems I have with reading Penrose is, as you say, the problems when he strays away from maths and physics. When he’s talking about these he’s great, and has made some fairly complex ideas understandable for me. But even he seems to know that when he starts talking about consciousness he’s on thin ice. His books *The Emperor’s New Mind* (which I read a long time ago) and *Shadows of the Mind* are full of caveats and phrasing that make it pretty obvious that he’s not in territory he’s fully confident of. My problem is I haven’t found many other people who combine the maths/physics/mind in such an accessible way. All that said, his *Road to Reality* is a good read so far, mainly because it’s only maths and physics.
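The chaotic-divergence point made above is easy to see numerically. The logistic map x → 4x(1−x) is a standard toy chaotic system (my example, not from the book): two trajectories that start a trillionth apart become macroscopically different within a few dozen steps.

```python
x, y = 0.4, 0.4 + 1e-12   # two starting points differing by one part in a trillion
max_gap = 0.0
for step in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
# max_gap ends up of order 1: the initial perturbation is amplified by
# roughly a factor of 2 per step until the trajectories fully decorrelate.
```

This is the sense in which a deterministic description can be practically useless: any real measurement of the initial state has finite precision, so only a statistical description of where the system ends up carries over.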