In addition to a pretense box, Stich and I propose a mechanism that supplies the pretense box with representations that initiate or embellish an episode of pretense, the “Script Elaborator”. This is required to explain the bizarre and creative elements that are evident in much pretend play. However, there are also much more staid and predictable elaborations in pretend play. This too is well illustrated by Leslie’s experiment. Virtually all of the children in his experiment responded the same way when asked to point to the “empty cup”. How are these orderly patterns to be explained? In everyday life when we acquire new beliefs, we routinely draw inferences and update our beliefs. No one knows how this process works, but no one disputes that it does work. There must be some set of mechanisms subserving inference and updating, and we can simply use another functional grouping to collect these mechanisms under the heading “Inference Mechanisms”. Now, to explain the orderly responses of the children in Leslie’s experiment, we propose that the representations in the pretense box are processed by the same inference mechanisms that operate over real beliefs. Of course, to draw these inferences the child must be able to use real world knowledge about the effects of gravity and so forth, and so Stich and I also suppose that the inferences the child makes during pretense can somehow draw on the child’s beliefs.
This is, I think, a fairly typical statement of one important respect in which belief is often said to be similar to imagination: each is subject to the same inference mechanisms. Nichols includes this chart:
Notice the 'inference mechanisms' that act on beliefs and imaginings alike.
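The architectural claim can be caricatured in code. The following is a toy sketch of my own, not anything from Nichols or Stich: "boxes" are just sets of propositions, and a single `infer` routine (a stand-in for the shared inference mechanisms, with all names hypothetical) is run over the belief box and the pretense box alike, drawing on the same background rules.

```python
# Toy sketch of the "boxes" architecture (names and representation are
# hypothetical -- an illustration, not Nichols & Stich's implementation).
# Key claim: one and the same inference mechanism operates over the
# contents of the belief box and the pretense box.

def infer(box, rules):
    """Close a box's contents under simple conditional rules.
    `rules` is a list of (antecedent, consequent) pairs; the box
    itself is not mutated."""
    contents = set(box)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in contents and consequent not in contents:
                contents.add(consequent)
                changed = True
    return contents

# Background knowledge (gravity is normal, the water is liquid, ...):
rules = [("cup had water and was turned over", "cup is now empty")]

belief_box = {"cup had water and was turned over"}
pretense_box = {"cup had water and was turned over"}

# The SAME mechanism processes both boxes, so both end up
# containing "cup is now empty".
print(infer(belief_box, rules))
print(infer(pretense_box, rules))
```

The point of running the identical `infer` over both boxes is just to dramatize the diagram: on this picture, what distinguishes belief from pretense is which box a representation sits in, not the machinery that processes it.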
Now I can see well enough that pretense and belief inferences tend to go in the same way. If I know full well that p only if q, and believe p, I'll often come to believe that q, just as, if I imagine p, I'll often come to imagine that q. (Modulo various familiar complications: sometimes I give up the previous belief, etc.) But doesn't just the same thing happen with desire? If I desire that p, and know full well that p only if q, I'll very often, through a very ordinary sort of means-end reasoning, come to desire that q, modulo various familiar complications like the possibility that I'll stop desiring p.
Take a background situation where I know that nothing funny is going on with the cups; gravity is normal, the water is liquid, etc.
Suppose I believe the cup had water in it and has been turned over. Then I'll believe that the cup is now empty.
Suppose I imagine or pretend that the cup had water in it and has been turned over. Then I'll imagine or pretend that the cup is now empty.
Suppose I desire that the cup had water in it and has now been turned over. Then I'll desire that the cup is now empty.
This suggests to me that the similarities between imagination and belief, in contrast with desire, are exaggerated by, e.g., the diagram above. Those inference mechanisms apply to desires just as well as to beliefs and pretenses. Are there similarities in inference mechanisms that distinguish beliefs and pretenses/imaginings from propositional attitudes more generally?
That desire works somewhat along those lines is part of the basic RTM package here. (I think a practical syllogism of roughly that kind is one of Fodor's standard illustrations.) But to test the matter of B/I/D similarities, you have to consider a wider range of cases than just that one.
For example, both B and I have processes operating on them along something like these lines (massively simplified and leaving out all the ceteris paribus machinery):
Input: P; The best explanation for P is Q. Output: Q.
But if you desire that P, and discover that the best explanation for P is (would be?) Q, that doesn't generally induce any conative attitude towards Q one way or another.
Input: P; Not(P&Q). Output: Not-Q.
Works great in both I and B. But switching to D, we don't seem to have anything like that. I desire to be a professional philosopher; I believe that I can't both be a professional philosopher and filthy stinking rich; yet I still have the desire to be filthy stinking rich. There are mechanisms that try to harmonize the contents of the BB, or of the IB, but the DB is allowed to be full of contradictory states. Harmony in the DB is, rather, a matter of generating preferences that are transitive, etc. To believe what you know to be false is pathological; to desire what you know you cannot have is just the human situation.
Wait, this doesn't make any sense to me: "Suppose I desire that the cup had water in it and has now been turned over. Then I'll desire that the cup is now empty." Why would my desire change? Why wouldn't I, on those suppositions, just get disappointed, and then form the new desires to turn the cup back over and to refill it?
Jonathan, I'm confused.
You give these two as candidates for I/O pairs that characterize beliefs and imaginings, but not desires:
(1) Input: P; The best explanation for P is Q. Output: Q
(2) Input: P; Not(P&Q). Output: Not-Q
Let's take them one at a time. Consider (1). I'm not sure exactly what the proper way to characterize the content of 'best explanation' is, but it either entails 'correct' or it doesn't. If 'best' doesn't entail 'correct', then I don't see that (1) characterizes imagination. I imagine that there's a surprising event whose true explanation isn't the best explanation; I don't imagine that best explanation to be true. (If this is right, I'm not sure (1) characterizes belief, either, but that's partly because I have a funny view about infallibilism.) And if 'best' does entail 'correct', then it looks like (1) should work fine for desire.
With respect to (2), I don't see any motivation at all for denying that it governs desire. I want to be awesome, but I don't want to be awesome and stinky, so I don't want to be stinky.
The example you give isn't of this form -- "I desire to be a professional philosopher; I believe that I can’t both be a professional philosopher and filthy stinking rich; yet I still have the desire to be filthy stinking rich." But that's just not an instance of the schema; we're characterizing inputs into desire.
With respect to your worry about my example, Jonathan, perhaps you're misreading 'had' for 'has'? I was imagining that the scenario that would satisfy my desires is one in which it used to have water in it, but has since been tipped over (and so now is empty).
Ah, I had thought that the relevant bits of your examples were beliefs, not all desires. But the form in question still won't work for all desires. For surely this is pointed in the wrong direction: "With respect to (2), I don't see any motivation at all for denying that it governs desire. I want to be awesome, but I don't want to be awesome and stinky, so I don't want to be stinky." The reason I don't want to be awesome and stinky is, simply, that I don't want to be stinky. If one considers cases in which the second conjunct doesn't already have such a conative status, they won't generally be ones where those inputs will result in that output. E.g., I desire to hold the Smarty McSmartypants chair of philosophy at Oxbridge (what self-respecting philosopher wouldn't?); and I definitely desire not (to hold that chair & to reside in Tucson next fall); who could live with such a commute? But I _do_ desire to live in Tucson next fall! And, I take it, there's nothing obviously pathological or incoherent about that set of desires, whereas a similar set of beliefs or imaginings would be incoherent.
I guess your cup/water example, if I now understand the tenses and attitudes correctly, just doesn't make sense to me. Why would I have those initial desires unless they were in service of an already-existing desire to empty the cup? And if I just have a weird cup-tipping fetish or something like that, then why should those initial desires produce that third, cup-empty desire? (Maybe I would desire that it was already refilled, so that I could immediately re-gratify my strange obsession?)
These I/O relations are also generally ceteris paribus ones, so a handful of odd, self-referential, etc., counterexamples just aren't going to matter at all. Also, on reflection, it would be better to package what I used as a second premise into the production mechanism itself, not as a freestanding input. Basically: we do defeasible abduction in the BB, and in the IB, but not in the DB. That I desire p will typically have no bearing on my attitudes towards q, where q is what would be the best explanation of p, were p to obtain.
Ok, this clarification is helpful, thanks. I think I have responses, but they'll take some work to develop, so I won't try to do it here just now.
Are these moves and kinds of examples made in print anywhere you know of?
I find it useful in thinking about the "logic of desire"---and dealing with things like the conjunction-failures that Jonathan W mentions---to think about things in terms of expected utility in decision theory (compare Jonathan W pointing to the constraint that "preferences be transitive").
Think of decision theory as a simple model of rationality constraints between cardinal degrees of belief and desire; and then we can examine various combinations of attitudes to see if they're decision-theoretically coherent. So e.g., it's perfectly possible for p to have high expected utility (much higher than ~p)---like Jonathan W's attitude to living in Tucson next year; and for some q also to have high expected utility---like getting the Smarty McSmartypants chair (also much higher than ~q)---but for the conjunction to have very low expected utility. And we can see that this will only happen where credences in p and q have particular patterns (e.g. the conjunction p&q has relatively low credence conditional on q).
As a limiting case, there'll be some rationality constraints purely on desire---things like that if you desire p, and desire q, you should desire p or q (if p is high expected utility, and so is q, then the disjunction is high too).
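The conjunction-failure pattern can be made concrete with a toy expected-utility calculation. The propositions are Jonathan W's Tucson/Oxbridge case; the particular credences and utilities below are invented purely for illustration.

```python
# Toy expected-utility model of the Tucson/Oxbridge case. The numbers
# are made up for illustration. World states are the four truth-value
# combinations of p ("live in Tucson next fall") and q ("hold the
# Oxbridge chair"), each with a credence and a utility.

# state: ((p, q), credence, utility)
states = [
    ((True,  True),  0.01, -50),  # both: the unlivable commute
    ((True,  False), 0.49,  40),  # Tucson only
    ((False, True),  0.10,  60),  # chair only
    ((False, False), 0.40,   0),  # neither
]

def expected_utility(condition):
    """EU of a proposition: credence-weighted average utility over the
    states where the proposition holds."""
    total = sum(cr for (s, cr, u) in states if condition(s))
    return sum(cr * u for (s, cr, u) in states if condition(s)) / total

eu_p = expected_utility(lambda s: s[0])            # desiring p
eu_q = expected_utility(lambda s: s[1])            # desiring q
eu_pq = expected_utility(lambda s: s[0] and s[1])  # the conjunction

print(eu_p, eu_q, eu_pq)
```

On these numbers, p and q each come out well above their negations (roughly 38 vs. 12 for p, exactly 50 vs. roughly 22 for q), while the conjunction sits at -50; and, matching the limiting-case constraint, the disjunction p-or-q comes out high as well.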
Anyway, I don't know whether this is helpful, but it does help me think about where these kinds of counterexamples are coming from, what might underlie what strike us as generally decent inferences among desires, and what prospects there are for exceptionless pure inferences among desires. (It also suggests an alternative location for where inferential moves among desires may come from---from the practical reasoning/decision-making module, thought of as an implementation/control system for something like decision-theoretic rationality. Pure desire-based inferences might end up as epiphenomena of some mechanism whose primary job is to control how we mix beliefs and desires together. I don't know how this relates to the RTM treatment of practical syllogisms, but it seems there should be connections, at least.)
I don't know whether/where any of this sort of thing might have already been hashed out, though I guess I'd be surprised if it didn't have at least a cursory treatment somewhere out there. Maybe Tim Schroeder has written on it?
Robbie's useful comment also touches on why I thought that part of the initial set-up was supposed to have inputs from beliefs: the stock examples of cognition developing new desires tend to incorporate beliefs as well. Just look at your initial example: "If I desire that p, and know full well that p only if q....". That seems to be the most natural kind of new-desire-cognition story that we commonly tell.