Mind Upload II

Let’s suppose that ASI has arrived and mind-upload technology has been developed. Assume one has a large compute budget and the capability to make copies of one’s mind which are psychologically continuous with oneself at the moment of brain-scanning. How should one then use one’s compute budget?

By “psychologically continuous”, I mean that each copy is just as much oneself as the original, biological self is. The assumption here is that computation is sufficient to capture human consciousness, so the conscious experience of people running on artificial substrates is identical to that of biological humans, and one’s copies are in no way distinguishable from the original.

Suppose that one undergoes brain-scanning, then spins up several copies of oneself based on that scan (each then being placed in a distinct environment). Because computation is sufficient to fully capture one’s consciousness, whether that computation is occurring within one’s biological body or within an artificial computer is immaterial to one’s subjective experience. Therefore, each continuation, artificial or biological, from the point of scanning is equally oneself, in terms of one’s own subjective experience. Of course, one cannot subjectively experience all of them at once. There will be a distinct stream of consciousness associated with each copy, and which of these streams one ends up in will be a matter of chance.

So, we can think of the moment of mind upload as a kind of gamble where, all other factors being equal, one has an equal chance of finding oneself as each of the psychological continuities. If some of these continuities are more fortunate than others, one would hope that one would end up as a more fortunate one rather than a less fortunate one. Therefore we can treat it as we treat gambling games in real life: a time to maximise expected value.
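As a rough formalisation (the notation is mine, not anything the argument depends on beyond what is stated above): if the scan gives rise to n continuations with lifetime utilities u_1, …, u_n, and one is equally likely to find oneself as any of them, then the value of the gamble is

$$
\mathbb{E}[U] \;=\; \frac{1}{n}\sum_{i=1}^{n} u_i,
$$

and the question of how to spend one’s compute budget becomes the question of which set of continuations makes this quantity as large as possible.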

Perhaps one should simply instruct the ASI to create one copy, give that copy the ultimate personal utopia, and terminate one’s biological self. But we need to be careful here: there are issues of value-drift, and some potential issues concerning psychological continuity, which need to be addressed.

Value-drift: one’s copies will be living for a vast number of subjective years, and over time their character will naturally evolve. Their ideas and beliefs will evolve until they bear no discernible relation to those originally held, and old memories will be forgotten. In fact, one’s uploaded life will be so much richer than one’s pre-upload life that one may be eager to forget it. The things one held dear at the moment of upload, the copy will have almost entirely forgotten by perhaps its 10,000th subjective year. Then, for the decillions (say) of subjective years the copy has yet to live out, it will be effectively entirely separate from the person one was at the point of brain-scanning. So one would be using the vast majority of one’s compute budget simulating someone who is effectively a stranger.

This is not a wise move: it’s neither in one’s self-interest, nor is there any moral imperative to do it. There is an argument that it is in one’s interests to propagate a more evolved version of one’s moral principles: although one will not oneself benefit from the wellbeing of this stranger, the fact that the stranger has evolved from one means the stranger is likely carrying out a morally superior version of one’s own belief system. This argument implies that it’s a good thing the stranger replaced one. But this is nonsensical: once one enters one’s personal virtual utopia, one is no longer a moral agent. The other inhabitants of one’s world are non-sentient; one’s behaviour has no moral implications external to oneself.

One could say: but don’t we then owe at least some moral obligation to this evolved self? No more than we owe to any other potential person, and in fact less than to the infinitely many different people we could simulate instead. Indeed, if one is concerned with spending one’s compute altruistically, one certainly oughtn’t spend it on some evolved version of oneself: whatever one values, it can be achieved much more efficiently by designing a simulation from scratch rather than basing it on oneself and one’s own utopia. So, we are interested in how one should spend one’s compute budget selfishly, because to whatever extent one wants to be altruistic, one should simply donate a proportionate amount of one’s compute budget to an ASI-run programme.

So simply running an immortal version of oneself indefinitely seems like a bad idea. But what about spinning up very many instantiations of oneself, each of whom very much shares one’s identity, each of whom is relatably oneself? One could have each of these continuations live one of a diverse range of lives, so that all the potentialities of one’s personality could be realised. But no: remember that this is a gambling game. Our goal is to maximise our EV. And how does one actually benefit from these diverse self-realisations? One doesn’t. One just hopes one ends up in one of the more favourable continuations. All but the highest-EV continuations just drag down one’s expected utility, so it isn’t in one’s self-interest to spin them up, so they shouldn’t be spun up at all. So, what if one uses one’s compute budget to create as many instantiations as possible of the highest-EV continuation? But this doesn’t actually increase one’s EV at all. One can only experience one continuation, so one’s expected value is equal whether one spins up one instantiation of a given continuation or an undecillion (one is maximising the average EV across one’s continuations, not the total).
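To make the average-versus-total point concrete, here is a minimal toy calculation (the utilities and copy counts are invented purely for illustration, and a single number per continuation is of course a crude stand-in for a whole life): since one wakes up as a uniformly random instantiation, one’s subjective expected value is the copy-weighted average of the continuations’ utilities, which is maximised by a single copy of the single best continuation.

```python
# Toy model of the upload "gamble" (all numbers are hypothetical).
# One finds oneself as a uniformly random instantiation, so subjective EV
# is the copy-weighted *average* utility, not the total summed across copies.

def subjective_ev(continuations):
    """continuations: list of (utility, number_of_copies) pairs."""
    total_copies = sum(copies for _, copies in continuations)
    return sum(utility * copies for utility, copies in continuations) / total_copies

best_only       = [(100, 1)]                    # one copy of the best continuation
best_duplicated = [(100, 10**6)]                # a million copies of the same continuation
diverse         = [(100, 1), (60, 1), (20, 1)]  # a spread of diverse lives

print(subjective_ev(best_only))        # 100.0
print(subjective_ev(best_duplicated))  # 100.0 -- duplication changes nothing
print(subjective_ev(diverse))          # 60.0  -- weaker continuations drag the average down
```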

So, what one should do is to instruct one’s ASI world-curator to alter one’s mind so that one will retain identity with oneself at the point of upload. It could grant one a more capacious memory and a more rigid personality, place one in a world more deeply rooted in one’s pre-ASI history than would strictly maximise value, etc. Of course, altering one’s nature is a dangerous game, liable to result in exactly the opposite of our goal: losing one’s identity by losing one’s human nature, rather than protecting it. So, one’s ASI-curator will ensure that while the functions of one’s mind are altered, one’s identity is preserved. Of course, one still wants to be able to develop oneself, learn new skills, etc.: to fully experience the life of the mind. This balancing act is the sort of task that is appropriate for ASI.

So, we have resolved the first of our two concerns: value-drift. One gets the ASI to design one a new brain which prioritises the preservation of a relatable self. On to our second concern, the one relating to the nature of psychological continuity. Recall that one is to terminate one’s biological self to maximise one’s expected utility. But what if conscious experience continues for some non-zero period of time after the point at which the brain is scanned? This seems likely. Events tend to take time. So, does this mean that there’s a 50% chance that one will end up in a continuation in which one dies almost immediately? If so, doesn’t that bring down one’s EV even more than just living out a not-quite-utopic biological life would? But this isn’t the case. To demonstrate this we first need to draw a distinction between psychological continuity and conscious continuity.

Consider sleep. When one goes to sleep and then wakes up, one’s conscious state is significantly shifted (and perhaps one’s continuity of consciousness is broken), but psychological continuity is preserved. Similarly, if one’s mind simply ceased to exist for the interval between the moment one fell asleep and the moment one woke up, psychological continuity would still be preserved. It certainly wouldn’t be anything like (permanent) death. But there would be no continuity of consciousness: the later conscious state would not follow directly, computationally, from the earlier one.

One doesn’t mind going to sleep. And this example in which conscious continuity is broken is (roughly) subjectively indistinguishable from going to sleep. So, conscious continuity is not what one values. One values psychological continuity. If one loses that, one dies. So, when one’s biological self persists for some short time after scanning and is then terminated, having found oneself in that conscious continuation is not a big deal: it’s just like losing consciousness, which we’ve demonstrated is not a major concern. One will still be psychologically continuous with one’s upload.

So, we have shown that one should scan one’s mind, then terminate one’s biological self and spin up exactly one upload, modified so that one’s identity is preserved over time. This gives us a clearer view of life after ASI. I think this is a nice intersection of philosophy and forecasting.