
Prior Knowledge

Friday, January 28, 2005

Ideal Decisions

To get the ball rolling, here's something I just posted to my other blog. It's nothing too amazing, but perhaps it will encourage others to post something better :)

----

Peter Railton ('Moral Realism' in Facts, Values, and Norms, p.12) writes:
Suppose that one desires X, but wonders whether X really is part of one's good. This puzzlement typically arises because one feels that one knows too little about X, oneself, or one's world, or because one senses that one is not being adequately rational or reflective in assessing the information one has...

I think it's plausible that ideal agent theories correctly identify what is in our self-interest. That is, the choice I would make if I were ideally rational and fully informed, etc., is probably the choice that is best for me. But it may be helpful to raise a variant of the old Euthyphro dilemma, and ask: Is X in my best interests because my idealized self would choose it, or would he choose it because it is in my best interests?

I think the answer is clearly the latter. But that then suggests that the reason why I should pursue X is not just that my ideal self would choose it. Rather, the real reason must be whatever was behind my ideal self's choice. My (normative) reasons are his (descriptive) reasons, in other words.

So I'm now wondering: what would those reasons be? In particular, I wonder whether they would simply reduce to the desire-fulfillment theory of self-interest that I've previously advocated. That is, what's good for us is for our strongest desires to be fulfilled in objective fact. The 'ideal agent' heuristic just serves to rule out any subjective mistakes we might make, such as falsely believing that Y would fulfill our desires.

Do you agree with this reduction, or do you think your idealized self might want you to value strikingly different things from what you do in fact value?

From my earlier post on ideal agent theories:
One way to think of this would be to consider A as temporarily gaining full cognitive powers (i.e. turning into A+), and being frozen in a moment of time until he makes a decision, whilst knowing that the moment the decision is made, he will be turned back into A. This ensures that A+ has motivation to seek what is in A's genuine interest, even in those cases when the apparent interests of A and A+ would otherwise diverge.

Can you imagine being in A+'s position here, and choosing to do something other than what would best fulfill A's desires? I'm not sure I can. [Recall that A+ is perfectly rational.] I just don't know what it would be for something entirely undesired (and not even indirectly fulfilling any other desires) to be in A's "interests". But those who don't subscribe to a desire-fulfillment theory of value must be imagining something like this. So I'd very much like to hear what it is.