
The Simulation Argument

September 13, 2010

After a rather prolonged absence due to a combination of travel and travel-induced illness, I thought it was about time to start posting again. Apologies in advance for the poor presentation of mathematical formulae.

Last week I came across an interesting argument that I hadn’t heard of before: Bostrom’s Simulation Argument (“SA”). SA looks at the idea that what we perceive as our own consciousness is in fact just the result of an incredibly advanced computer simulation. In particular, this simulation is an ancestor simulation being run by the (presumably) more technologically advanced civilisation that humans will become should they survive long enough.

An obvious first step in such an argument is to consider whether or not technology to run such simulations is even possible. Bostrom considers this in his paper and concludes that it will be possible, even with large margins for error in his assumptions. Having established this point he moves on to the core of SA.

First up, some definitions. Let f_p be the fraction of human-level technological civilisations that survive to reach a point where they are capable of running mind simulations (what Bostrom calls “posthuman”). Let N be the average number of ancestor simulations run by such a posthuman civilisation, and H be the average number of individuals who lived in the civilisation before it reached the posthuman stage. The total number of simulated minds is then the product of these three quantities.

We can now write the fraction of minds that are part of a simulation (f_sim) as f_sim = (f_p * N * H) / (f_p * N * H + H). In order to continue the argument, Bostrom now turns to looking at whether or not posthuman civilisations are actually interested in running ancestor simulations. Let f_i be the fraction of posthuman civilisations that are interested, and N_i be the average number of ancestor simulations run by such a civilisation. We can now rewrite N as N = f_i * N_i.

Given the above (and dividing top and bottom of the fraction by H), we now have f_sim = (f_p * f_i * N_i) / (f_p * f_i * N_i + 1).
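The behaviour of this fraction is easy to check numerically. Here is a minimal sketch of the formula above; the particular parameter values are made up purely for illustration and carry no significance:

```python
def f_sim(f_p, f_i, n_i):
    """Fraction of all minds that are simulated:
    f_sim = (f_p * f_i * N_i) / (f_p * f_i * N_i + 1)."""
    x = f_p * f_i * n_i
    return x / (x + 1)

# Even small fractions f_p and f_i are swamped by a large N_i:
print(f_sim(0.01, 0.01, 1))      # product is tiny, so f_sim is near 0
print(f_sim(0.01, 0.01, 10**9))  # N_i huge, so f_sim is near 1
```

This makes the structure of the conclusion visible: the only ways to keep f_sim away from 1 when N_i is very large are to drive f_p or f_i towards zero.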

Bostrom now concludes his argument by saying that because of the immense computing power available to a posthuman civilisation that is interested in running ancestor simulations, N_i must be very large. It is therefore the case that at least one of the following must be true:

f_sim ≈ 1 (almost all “minds” are actually part of an ancestor simulation)

f_p ≈ 0 (almost no civilisations reach posthuman stages)

f_i ≈ 0 (almost no posthuman civilisations are interested in running ancestor simulations)

That is, we should rationally believe that at least one of the following statements is true: our civilisation will never reach a state in which it has the technology to run ancestor simulations, our civilisation will never reach a state in which it is interested in doing so, or our experience of consciousness is the result of an ancestor simulation.

There are some problems with this argument. Firstly, it is highly dependent on the idea that the ability to model consciousness is a function of the computing power available to us. Bostrom gives an explanation of why he considers this to be a non-issue, but I’m not entirely comfortable with it yet. Secondly, there is the assumption that N_i must be large. To quote Bostrom in his section IV:

Because of the immense computing power of posthuman civilizations, N_i is extremely large, as we saw in the previous section.

I encourage you to read the paper; I don’t believe that he does in fact show this. He shows the ability to run incredibly large numbers of ancestor simulations, rather than the desire to run them. There is no argument to show that a posthuman civilisation interested in running ancestor simulations will in fact run lots of them. N_i need not be large, hence the conclusion doesn’t follow.

The SA has some interesting implications, which I’ll try to address in my next post… It’s time for coffee.

2 Comments
  1. Justin permalink
    September 13, 2010 1:35 PM

    Great post Christian. I am glad to see that you are taking up the Simulation Argument. Bostrom just seems to be providing a model for how Cartesian skepticism could naturally arise in a world with humans and technology.

    I know we said that one of the things this blog is trying to weasel around is the materialist-skeptic (if you can’t be provably certain, then it is anyone’s game). If the world we observe doesn’t have any physical basis (or is far removed from it, as in the case of Bostrom’s SA), then the scientist has no special access to the Truth and thus we have no reason to criticize “irrational” thinking.

    I now remember that I had once upon a time resolved this apparent conflict with myself as follows:

    1. The world may not have an existence independent of the human mind, or any system of belief that depends on certain entities existing separate from me, the thinker.
    2. Statement 1 is actually a non-issue since I still have these flickering perceptions. I can stop believing in them, but they don’t go away.
    3. I must coexist with my perceptions and in trial-and-error fashion discover that they respond to my actions. I begin to experiment and predict the behavior of my observations.
    4. My situation is equivalent to one where I had never doubted the independent existence of my observations.

    This is a silly argument, but it does suggest that even though Cartesian skepticism deprives existence of any certain meaning we can just define “exists” as “I consistently observe.”

    Of course if we become unplugged from the Matrix we would start to consistently observe a different reality. We would then have to speculate how the old and new realities stand with respect to one another.

  2. September 13, 2010 2:14 PM

    One of the reasons that I’m drawn to looking at SA is exactly that it could make beliefs that I would call irrational suddenly become rational. I’m going to write about this some more tomorrow morning, so will hold off on more detail for now…

    One point though; Bostrom doesn’t show that we “live” in a simulation, rather he shows that believing that we do isn’t irrational. Bostrom himself apportions equal probabilities to the three possibilities raised. This is an important distinction that I didn’t necessarily pay attention to when I was initially thinking about this argument.
