Ideal Decisions
To get the ball rolling, here's something I just posted to my other blog. It's nothing too amazing, but perhaps it will encourage others to post something better :)
----
Peter Railton ('Moral Realism' in Facts, Values, and Norms, p.12) writes:
Suppose that one desires X, but wonders whether X really is part of one's good. This puzzlement typically arises because one feels that one knows too little about X, oneself, or one's world, or because one senses that one is not being adequately rational or reflective in assessing the information one has...
I think it's plausible that ideal agent theories identify our self-interest. That is, the choice I would make if I were ideally rational and fully informed, etc., is probably the choice that is best for me. But it may be helpful to raise a variant of the old Euthyphro dilemma, and ask: Is X in my best interests because my idealized self would choose it, or would he choose it because it is in my best interests?
I think the answer is clearly the latter. But that then suggests that the reason why I should choose X is not just that my ideal self would choose it. Rather, the real reason must be whatever was behind my ideal self's choice. My (normative) reasons are his (descriptive) reasons, in other words.
So I'm now wondering: what would those reasons be? In particular, I wonder whether they would simply reduce to the desire-fulfillment theory of self-interest that I've previously advocated. That is, what's good for us is for our strongest desires to be fulfilled in objective fact. The 'ideal agent' heuristic just serves to rule out any subjective mistakes we might make, such as falsely believing that Y would fulfill our desires.
Do you agree with this reduction, or do you think your idealized self might want you to value strikingly different things from what you do in fact value?
From my earlier post on ideal agent theories:
One way to think of this would be to consider A as temporarily gaining full cognitive powers (i.e. turning into A+), and being frozen in a moment of time until he makes a decision, whilst knowing that the moment the decision is made, he will be turned back into A. This ensures that A+ has motivation to seek what is in A's genuine interest, even in those cases when the apparent interests of A and A+ would otherwise diverge.
Can you imagine being in A+'s position here, and choosing to do something other than what would best fulfill A's desires? I'm not sure I can. [Recall that A+ is perfectly rational.] I just don't know what it would be for something entirely undesired (and not even indirectly fulfilling other desires) to be in A's "interests". But those who don't subscribe to a desire-fulfillment theory of value must be imagining something like this. So I'd very much like to hear what it is.
2 Comments:
"Can you imagine being in A+'s position here, and choosing to do something other than what would best fulfill A's desires? I'm not sure I can. [Recall that A+ is perfectly rational.] I just don't know what it would be for something entirely undesired (nor indirectly fulfilling other desires) to be in A's "interests". But those who don't subscribe to a desire-fulfillment theory of value must be imagining something like this. So I'd very much like to hear what it is."
Hi.
I tried to think of a counterexample, so thought I would start off by making A an extreme racist. Perhaps A has a desire to see harm come to all people of some other race that is living within the same community as A. Assuming, of course, that A is mistaken in believing that having this wish fulfilled will make him or her happier, what is the problem? Wouldn't A+ decide that even though A wants to see others harmed, in that particular instance, it would not benefit him, and therefore choose to take a course of action opposed to the one A would make?
Of course, everyone has a desire for happiness (I think?). So it could be said that A+ is resolving a conflict between two desires, namely the desire for happiness, and the desire for harm towards the other race. Is this the reason why my example fails to be a counterexample? If so, then I doubt that A+ would ever choose to do something other than what would best fulfil any of A's desires. However, this may be simply because A and A+ will both want A to be ultimately happy... but if the desire-fulfilment theory rests upon this too often, then doesn't it just reduce down to some kind of happiness theory of value?
I think the crux of my argument probably rests on the point that A+ would surely drop some of A's desires, and perhaps gain some new ones, due to the leap in cognitive powers.
I haven't actually read more than this blog, so I wholeheartedly apologise if this argument of mine has already been dealt with a long time ago! Of course, I could just be talking rubbish, and you may never have seen an argument resembling anything near it.
Posted by Patrick
2/04/2005 12:11:00 AM
Hey Patrick,
One disagreement I have is that I think we can desire something simply as an end-in-itself, rather than it necessarily being just a means to our happiness. (If you're interested, I argue for this claim in that last linked-to post on 'desire fulfillment'.)
So if A really does desire that others are harmed, I'm not sure why A+ would disagree with him. I take it that being more rational and knowledgeable simply makes A+ better able to achieve A's ultimate ends (i.e. desires). I'm not sure how it could change A's ultimate ends. (Put another way: evil geniuses aren't necessarily 'irrational'. They just aren't nice. Improved rationality in no way guarantees improved morality.)
Though I think what you're getting at here is that A's racism is getting in the way of his other desires (including his general happiness, perhaps). That's certainly plausible, but - as you noted - if understood this way, then A+'s decision is simply a case of 'conflict resolution' geared towards maximizing A's desires.
I think, to be a counterexample, we would need a case where A+ wants something for A that A himself does not desire. That is, we need the boost in cognitive powers to somehow alter A's ultimate ends (e.g. to make him more moral). But it's difficult to see how this is possible, at least given an 'instrumental' understanding of rationality (i.e. as finding the means to a presupposed end, rather than providing the ends themselves).
[Having said that, there was an interesting proposal to this effect made in the comments at my other blog - the core idea being that increased knowledge may open A+ up to the possibility of new ultimate ends that A had never had a chance to consider.]
P.S. Thanks for the thoughtful comment - it's good to see some discussion beginning to develop on this blog!
Posted by Richard
2/04/2005 12:39:00 AM