February 3, 2012

Some more problems about identity, survival, and stuff



A post entirely in English:

I just discovered this blog: http://philosophyandpsychology.com/, and I wanted to comment on two posts, but I somehow got carried away, and it got so long that it's better suited here.

I'm commenting first on this post:
http://philosophyandpsychology.com/?p=1875 "Why you can't be harmed by your own death"
Then on this one: http://philosophyandpsychology.com/?p=1867 "Some thoughts on why I would kill myself in order to teleport"

About the first one:
I must say I am myself very fond of the Epicurean thesis you point out, to which I've been committed since the first time I read it. However, I have two comments, because I feel that your commitment to it is perhaps a little too easy.

First, although I think you are right to distinguish between the hedonist claim and the claim that the existence of the subject is required for something to harm it, I don't think you can really deal with the infantile-adult case (and neither can I). For though a subject indeed still exists, it is doubtful that it is the same subject as before, if he or she has really been lobotomized. I'm committed here (and I think you're bound to be as well) to David Lewis's thesis in "Survival and Identity": if you think that what constitutes the subject to be harmed is basically mental states (be they unconscious), then you have to hold that the persistence of one specific subject through time just is the continuity of, and inter-relatedness between, its mental states. Therefore, I think you should claim that no harm is done to the former subject by the very existence of the lobotomized latter one. The case is really the same as death: the "harm" is only objective, from a viewer's or God's point of view, or subjective for relatives and friends, or for an irrational subject in the past contemplating the future. If there is no way for any previous perspective of the subject to connect with the present impoverished mental states, I can't see how you could hold that a subject is being harmed. And it is almost the same if the person still remembers the previous perspectives but cannot feel them as his or hers. The immediate problem is that, although I maintain it would be an objective loss, it becomes very difficult to make things matter whenever a personality change is at stake.

Second, I'd turn to the objective side of the argument. My point also concerns the holders of the “Deprivation Thesis”, and is inspired by Meillassoux's perspectives in Spectral Dilemma and The Divine Inexistence: if death can possibly be, in any circumstance, a loss, then it should be a loss in every circumstance, always and everywhere. In fact, every death is arguably an early death, and if Susie's early death is tragic, then every death should also be tragic. I mean that if the problem with death isn't subjective, then its tragicness proliferates. For example, you made the point that “according to one version of the Deprivation Thesis, we can determine that Susie’s death was bad for her by making a simple comparison between what happened in her actual life (early death) and what happened in a near possible world (long life). Since she might have experienced 50 more years of well-being in the near possible world, Susie’s early death is bad for her because it deprived her of all that well-being”. And you seem to maintain that, although not subjectively bad for Susie, this still might be objectively true. But then again, there is also a possible world where Susie lived 100 years longer, 1000 years longer, or forever, in well-being or whatever state you think is worthy of humanity (morality, contemplation, etc.). You may answer that we shouldn't worry about those worlds because the laws of ours don't apply there, but I'd say: that's precisely the tragedy! The event of her death occurred, instead of the event of the laws' changing. Let's put aside the general question of “loss of potential” (it would lead us too far to regret that a maximal sum of potentialities isn't realized in our world), and focus only on subject-related loss: a human person is arguably worthy of immortality, since we cannot see a limit to what he or she is capable of through unlimited time.

I hope that I'm not talking too foolishly, and that I've made my points clear.

About the second post, I'd have two remarks, again.

First, the 99.999999999999999999…%. Why so many 9s? The ellipsis suggests an infinity of them, and therefore the wish for the machine to work 100% of the time; but even without it, I believe you don't need nearly so many 9s for the machine to be safer than the train or any other mode of transportation you like. And yet I feel you would indeed want quite a strong guarantee that you wouldn't simply be destroyed by the machine. I'm not sure why; maybe we're just not used to teleporters, or maybe it is somehow related to my second remark.
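For what it's worth, the arithmetic behind "you don't need so many 9s" can be sketched in a few lines; the train-risk figure below is a made-up order of magnitude, purely for illustration, just to show how quickly a string of 9s overshoots any everyday travel risk:

```python
# Sketch: how many 9s does a teleporter need before it beats an
# ordinary mode of transportation? The rail figure is a hypothetical
# order of magnitude, not a sourced statistic.

def failure_probability(num_nines: int) -> float:
    # A success rate written as 99.9...% with num_nines nines in total
    # (e.g. 99.99% -> 4 nines) implies a failure probability of 10**-num_nines.
    return 10.0 ** -num_nines

# Hypothetical per-trip fatality risk for a train ride (illustrative only).
TRAIN_RISK_PER_TRIP = 1e-7

for nines in (2, 8, 20):
    p = failure_probability(nines)
    print(f"{nines} nines -> failure probability {p:.0e}, "
          f"beats the (assumed) train risk? {p < TRAIN_RISK_PER_TRIP}")
```

On these made-up numbers, eight 9s already suffice; twenty 9s, let alone infinitely many, buy nothing that any everyday risk comparison could justify.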

Secondly, then, I'm not sure about your justification for not pressing the purple button, as opposed to the green one. I agree that you shouldn't press the purple button. I want to agree that you should press the green button, but I'm not sure about that either.
Let me explain: if the difference between the purple button and the green one were simply utility, then you'd expect pressing the green one to be pertinent (which it is) and pressing the purple one to be indifferent or pointless (which it is not). If it matters that much that you don't press it, if it would be suicide, while pressing the green button is fine, it must be that in one case someone is killed and in the other case not. Let's reverse the situation. The voice addresses not the former you in Orlando but the new you in L.A., and you/L.A. has the choice to press the purple button and destroy your old copy in Orlando. Wouldn't that be murder? Murder, not suicide. So the problem really is about who's who, not about utility. For pressing the green button to be useful, it has to be useful to you, and to nobody else, and then you have to be sure that it is still you in L.A. But if we are in the situation you've described, you are the you in Orlando, so what?
Let's say now that the former you has only partially been destroyed, so that the living organism is still functioning but the mind is completely gone. I believe it wouldn't be murder for you/L.A. to destroy that organism, although you'd feel it's not you. At this point, I'd like to invoke David Lewis's paper again: he would say that in the situation you've described, two people are at stake, sharing the same past but living different mental lives. So of course you/Orlando wouldn't want to kill himself, since he is not the same person as the you/L.A. who resembles him. And that's why it would be murder for you/L.A. to kill you/Orlando. In the third situation, there is no you/Orlando, so no question arises there. But then comes a fourth situation: the former you has not been destroyed at all, but has been, as they say in the year 2234, “ultrafrozen”, so much so that he hasn't had a single brain or physiological reaction since the process of teleportation began, but he could still be “hyperdefrosted” and live a normal life (this system aims to offer a guarantee in case something goes wrong during the atomic reassembly). Would it be murder to destroy him? Would it be suicide to sign, before the teleportation, a form stating that you know such a frozen you is going to be destroyed, and that you still want to go (although obviously, even for teleportation, nobody would read the terms and conditions...)? I think it's quite clear that destroying the ultrafrozen you wouldn't be suicide for you/L.A., even though you share the same name with that thing in Orlando. But destroying an organism that could be reanimated and live a life of its own does sound like murder. Personally, I'd say that if teleportation isn't suicide altogether, then you should be able to destroy that ultrafrozen thing with indifference, because no other you has lived since the teleportation, and YOU are not this thing. But that teleportation isn't suicide isn't obvious to me.

Let's now take another situation (beware, it's going to be severely fictional): you discover that there is, in our world (and not in another possible world), another dimension (and only one), in which there is another Earth, exactly the same, with another you, exactly the same (atom by atom), but that one of the two is going to be destroyed, only one, and there's no way to predict which one, in a quantum-mechanics-like situation. The two of you are going to stay exactly the same, have the same thoughts, the same fears, until the very moment it happens (I admit the situation isn't quite likely). If you think teleportation isn't suicide, shouldn't you think there is actually one person at stake, and not two? You say: “The reassembled clone is atom-by-atom identical to the you that pressed the green button. You can’t get any better in terms of continuity of identity than an atom-by-atom preservation.“ But if you hold that identity is informational identity, or informational plus contextual identity, then the two of you are even literally identical. I'm not sure; something like that. At the least, you should think that neither of the two can really be killed, since he'll have an identical substitute, as in teleportation (although the family and friends of one of the two would indeed think, wrongly, that their beloved is dead). There is a difference between informational identity, or even complete inter-relatedness of mental states, and continuity in itself. It strangely seems to me that an individual who had exactly the same memories as I would have, had I survived, would not be me if a break of some sort had occurred.

Let's admit that position, for argument's sake. Let's suppose that teleportation is suicide, plus the creation of some identical other person. Now, maybe what you wanted to say was that teleportation is indeed suicide, but a useful suicide, to be opposed to the useless suicide of the purple button. And I'm not sure about that either, but I guess that maybe, if you hold, as you do, that death is harmless from a subjective point of view, then since objectively no harm is done in teleportation, for an identical new person is created, teleportation is OK. Suicide, but OK. (At least, no more harm is done than in not replicating a person when there's no use for it; no more potential is lost.) But then, what about the purple button? I'm not sure about the utility argument. It's arguable that your relatives would freak out if there were two of “you”. Even the you/L.A. would. So it would be useful to destroy yourself (and consider the legal issues in the duplication of you).

Please tell me where I'm wrong!
