Nice write-up. If I knew the book was like a choose-your-own-adventure book, I might not have said no so quickly.
I don't have much to say about the content, but it does clarify for me exactly what the book is about.
One weird (and, I admit, kind of disappointing to me personally) aspect of the book is that so far it seems as if it is primarily about decision theory, motivated by certain putative kinds of choices one might make, and not, as I had hoped and the title suggested, primarily about transformative experience. Though obviously there is more to come.
Also my spellchecker apparently recognizes neither "transformative" nor "epistemically".
I mean, I think there are lots of things that could be said about experiences that are in some sense or another (whether either of Paul's two or in yet other senses) "transformative", and Paul's said the most about what seems to me the least interesting (epistemically transformative), and the things she's said thus far about personally transformative experiences seem, well, not totally worked out. (Cora Diamond in an essay says "I once stood on a ledge behind a waterfall, where all I could hear was the water thundering down, all I could see in front of me was thousands of gallons hurtling down. The experience I had I could describe only by saying something like 'Now I know what "down" means!'"; I claim that this is a totally comprehensible story even though of course she knew what "down" meant beforehand as well. Nevertheless it seems she's gained some kind of knowledge, doesn't it? I'm not entirely sure how to relate this to something like "now I know what durian tastes like". Or something like "now I understand how to go on" ("now I understand Wittgenstein on rules!" -- perhaps I could easily give back an understanding, like Aristotle's akratic, but now I understand): it seems like it should be covered by the epistemically transformative, but I dunno if it is.)
Not to be all spoiler-y, but cochlear implants do come up in chapter 3.
4: this is part of my frustration with the book, too. And I'm further along than it sounds like you are, and it hasn't been addressed yet.
7: I know, but I didn't think that was fair to put in the summary and I was trying to be fair to my initial reactions. Also apparently my initial reaction was to call chapter one the introduction and, thanks to that, chapter two chapter one. Sorry! Probably future posters should do better than phone-typing.
I also expected more about the experiences themselves than about creating a rational theory of transformative decisions. I'm also very skeptical about a rational basis for decision-making actually being the best way to make decisions, although I don't have a meaningful counterargument. I think I'm just more open to the descriptive decision theory mentioned, which discusses why and how people actually make decisions but not how they SHOULD make them.
I find the variety of "rational decision-making" discussed by Paul curious, or at least quite different from the concept of "rational decision-making" that I generally work with, and will be interested to learn (by proxy, as the library didn't have a copy I could get) whether it is explained in any depth in the book (or whether there's a citation in the book to a paper where it's explained).
Oh, also I'll be traveling today and so if anything I said is ambiguous or just wrong, I probably won't be around to explain myself. That could be a good thing, I think.
I found myself stuck on really not seeing the weight of the central problem the book seems to be addressing, which, as I understood it, is: (1) we have this normative decision model, where a person assesses the probabilities of various outcomes, determines the subjective value of each of the outcomes, does some quick multiplication, and does whatever has the highest expected value, and (2) it completely breaks down for personally transformative experiences, because personally transformative experiences can change you into someone with different subjective values for the various outcomes.
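Just to pin down what I mean by (1), here's a minimal sketch of that model written out as code (Python; every option, outcome, probability, and value here is made up for illustration -- it's just the "quick multiplication" step):

```python
# A minimal sketch of the normative expected-value model: for each act,
# sum probability * subjective value over its outcomes, then do whatever
# has the highest expected value. All numbers are invented.

options = {
    "become_vampire": {
        # outcome: (probability, subjective value to current-me)
        "love_it": (0.7, 80.0),
        "hate_it": (0.3, -100.0),
    },
    "stay_human": {
        "life_as_usual": (1.0, 10.0),
    },
}

def expected_value(outcomes):
    """The 'quick multiplication': sum of p * v over the outcomes."""
    return sum(p * v for p, v in outcomes.values())

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):+.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("do:", best)
```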
And I'm probably missing something fundamental, but the model really doesn't seem to me to break down. It gets a little more complicated, but doesn't break down.
In the vampire case, the problem is that you know that if you turn into a vampire, you'll have a different set of preferences than you have as you make the decision. But I don't see that breaking the normatively rational decision-making model. You can gather information about what your sparkly preferences are likely to be, by asking people like you before becoming sparkly what their preferences are now that they have become sparkly. At that point, you can do the math on (1) what are the odds of my future preferences, (2) what are the odds of various outcomes, so (3) what's the expected value of all the outcomes considering my future preferences. Add in a factor for whether the fact of changing preferences, as you rationally expect to based on the evidence, is repugnant or not to you as you make the decision, and I don't see how this isn't a situation that you can perfectly adequately apply rational decisionmaking to.
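To show the "gets more complicated but doesn't break down" point concretely, here's the same arithmetic extended the way that paragraph suggests: survey-based guesses about future preferences, plus an explicit term for the repugnance factor. A sketch only -- the numbers and the name repugnance_penalty are mine, not anything from the book:

```python
# Sketch of the extended model: uncertainty about your *future*
# preferences is just another thing to take an expectation over.
# Probabilities come (hypothetically) from surveying people like you
# who already became sparkly; all numbers are invented.

future_preferences = [
    # (probability of ending up with this preference profile,
    #  value the changed you would assign to the vampire life)
    (0.8, 90.0),   # most people like you end up loving it
    (0.2, -40.0),  # some end up regretting it
]

# Current-me's distaste for having my preferences changed at all,
# counted from the perspective I hold at decision time.
repugnance_penalty = -15.0

ev_vampire = sum(p * v for p, v in future_preferences) + repugnance_penalty
ev_human = 10.0  # life as usual, valued by current preferences

print(f"EV(vampire) = {ev_vampire:+.1f}, EV(human) = {ev_human:+.1f}")
print("do:", "become_vampire" if ev_vampire > ev_human else "stay_human")
```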
12 kind of goes to 10 -- I'm summarizing her description of rational decisionmaking on pp. 21-24.
10: the first footnote on normative rational decision-making is
Paul Weirich (2004) gives an excellent treatment in his Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances.
12.2: I think what's missing there is that her model of rational decision-making has, as an obligate step in the value estimation, the simulation of future states. So you simulate the future state internally, probe how you would feel, and then that's where you get your value estimate.
She seems to feel that is a critical component of rational decision-making, which is the thing I've never quite understood.
15: hm, yeah, that's a whole damn book she's citing, so I'm pretty much not going to get much out of it, but I think part of the problem might be that I'm coming from a, let's say, reinforcement learning perspective, where all you need out of the value function is that it give you some information in order to make the decision you are best able to estimate as optimal given available information, whereas she is looking more at making the actual optimal decision as a necessary precondition for rationality? Maybe?
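What I mean by the RL framing, as a toy sketch (Python, invented numbers): the agent only ever has noisy estimates of the values, and on this reading the rational choice is the argmax of the estimates, whether or not it happens to coincide with the true optimum.

```python
import random

random.seed(0)

# True values are unknown to the agent; it only sees noisy estimates.
true_values = {"A": 5.0, "B": 6.0}
estimates = {opt: v + random.gauss(0.0, 2.0) for opt, v in true_values.items()}

choice = max(estimates, key=estimates.get)          # rational given its info
truly_best = max(true_values, key=true_values.get)  # what an oracle would pick

print("estimates:", {k: round(v, 2) for k, v in estimates.items()})
print(f"agent picks {choice}; the truly optimal act was {truly_best}")
# On the RL-ish reading, picking the argmax of the estimates is already
# rational, even when it misses the true optimum; on (what I take to be)
# Paul's reading, rationality seems to demand more than that.
```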
Huh. So information about your likely preferences in the future doesn't count as sufficient to support rationality unless you get there by internal simulation? That would explain what I'm missing, but I really don't get it. (And should reread to see where I missed it in the text.)
I mean, purely by introspection, simulation of future states of being is hardly ever how I make decisions, if at all. Very short-term, minor decisions ("I can almost taste that ice cream already,") maybe a little, but not anything with any significant time horizon.
Although, if that's the issue, it's not clear to me in the text. On p. 29, Paul says:
Rather, before becoming a vampire, you cannot determine the subjective value of becoming a vampire at all, and thus you cannot, even in principle, make the right sorts of value-based comparisons.
I don't see her distinguishing between the sorts of things you can know about future preferences ("Alice is similar to me, she became sparkly last week, and she says it changed her preferences like so,") and the sorts of things you can't.
19: well, intuitively in the vampire case, something like "I don't want to do that because eating blood all the time is gross, and I can't imagine a future me that would be interested in that" doesn't seem too far outside the realm of real decision making, does it?
Sure, but (a) you can rationally incorporate that into your decision making along the lines of "Changing my preferences so as to incorporate blood-drinking is repugnant to me now, even if I'd enjoy it once my preferences changed," and (b) people decide to 'acquire tastes' all the time -- "I can't believe I'm going to put squishy, disgusting raw fish in my mouth. But if I fake it through the first couple of meals, people really do seem to get to like sushi." I don't see how that's an irrational basis for decisionmaking, even if it's based on a cold-blooded prediction about how people who force themselves to eat sushi tend to feel about it after a while, rather than an internal simulation of how it feels to enjoy eating sushi.
With the caveat that I've not read any of this, is the point supposed to be a phenomenological / qualia issue?
There's a subjective quality of experience of 'being a vampire', or of 'eating sushi', that can't be had any other way than having that experience, and knowing what the subjective quality of that experience is going to be like (for me) is going to be a crucial input into any rational decision-making that I might make about my future self.
[Not endorsing this line of argument]
That sounds about right, although I don't think I noticed the word qualia, which usually pops out for me because it's funny.
But if that's the whole thing, it seems to divide all sorts of perfectly conventional decisions into those that are 'rational' and those that can't possibly be in a way that doesn't seem to me to correspond well to any layperson's sense of rationality.
Have not read the thread, but:
Here's the main thing that is irritating to me: transformative experience and making hard decisions have barely any intersection.
AFAICT, nearly all transformative experiences don't involve choice, the most obvious example being puberty/adolescence. Watch out, you're about to get walloped by hormones and transforming peers! No one leaves unchanged. But also, death of a loved one, getting fired, living in Syria or somewhere else war-torn, having an extreme swing in your personal finances, etc...
Second, most hard, difficult decisions aren't transformative. Thorn's custody battle is difficult because she cares deeply about the kids and can't control Lee, not because it's unknown how Thorn will be transformed. If you are well-informed about two very shitty options - 'would you like to be fired and stay local, or be fired and move your family into your parents' basement?' - what makes the decision potentially difficult is the (legitimate) fears about both situations.
Third: Things that are actually transformative and a decision are almost never a one-time deal. It's not a fork in the road, it's a Plinko board. That is, if you're hungry for a major change in your life and you turn down the transfer to Europe, there may be fifty more daily things you find tempting before you decide to quit your job and go backpacking for a year around Mexico. The decision before you said more about yourself in the past than what was unknown about you in the future.
Finally! If there were an honest-to-god transformative decision to be made, rational decision theory and computing expected values would be the dumbest way to go about making your choice. What you should do is:
1. Thoroughly vet your options (to your limited ability)
2. Understand your fears and values
3. Go with your gut, even if it seems inconsistent with 1 and your fears. (It should basically be consistent with your values, though.)
The discussion of problems around your own preferences for your future preferences came up in Nick Bostrom's "Superintelligence" - AIs might want to rewrite their own code to change their preferences in some way.
He used the example of childbirth; you might, currently, be a child-hating monster, but you might still decide to have a child knowing that the experience would change you into someone who didn't hate children, or at least didn't hate all children.
Another interesting one is drug addiction, and the treatment thereof.
Have not read the book, but:
Heebie seems to say everything that needs to be said, and say it extremely well.
Yeah I don't get the problem either. Wouldn't asking your friends be good enough in the vampire hypothetical? Why would you be any different from your friends? You will probably like it too. Maybe that isn't what you should do if being a vampire is immoral, will disappoint Jesus when he comes back in 2056, or is bad for your carbon footprint or something. But the experience of being a vampire isn't a big surprise.
The experience of being a vampire and a lycan at the same time is apparently unique.
Asking your friends about being a vampire might be insufficient, because they may not experience the loss of their former self as something to be sad about. But you still being you, it seems sad to contemplate losing yourself.
Qualia ... I won't be soothed over, smoothed over like milk
Which is why it's kind of a terrible example - you almost never 'lose yourself' during transformative experiences that you were able to choose. 'Losing yourself' is one of our biggest fears - will my depression/anxiety/grief become so overwhelming that I lose myself? But that is not what you're debating in a transformative choice. That can happen, no matter which way you choose. There are no guarantees or safe routes that you can avoid that possibility.
transforming peers
This made me think of a member of the House of Lords transforming into a car.
22, 23: yeah I think ttaM pretty much has it; this is one of the ways the "Mary the vision scientist" example comes in. Paul is asserting that there's a fundamental difference between making a decision when you have phenomenal access to your value function and when you merely know things about your value function. So you could have however high confidence that you understand how you will feel, without being able to imagine how you would feel in a usefully rich way.
She has tied this in other contexts (I believe) to neuroscience research that shows that the same mechanisms are recruited for episodic memory and counterfactual imagining about the future (e.g.)
24, 25: has she mentioned the "Mary the vision scientist" argument? That is essentially the classic argument for qualia, I believe.
Not to speak for the author of a book I have not actually read, but I suspect that Paul wouldn't disagree with any of 27. Her premise as far as I gather it seems to be that people vastly overrate "rational decision-making" (especially rational decision-making by simulating future phenomenal states) as an approach to making life decisions.
One of my problems with Paul's argument is that even given a looser definition of "rational decision-making" than the one she seems to be using (still haven't read the book, yep) my intuition would be that people generally don't make decisions that way; "rationality" tends to be more like "rationalization" after the fact.
37: She talks fairly explicitly about computing expected values, and says:
The reason why the normative standard is important to us, even practically, is because we want to know how reliably we, as decision-makers, can hope to meet this standard in our decision-making. I am especially interested in exploring the way we should regard and navigate deeply personal, centrally important, life-changing decisions, given the desirability of meeting the normative standard in such cases.
...and a whole bit elaborating on this. My understanding is that the normative standard is computing expected values.
Well right. I think she is arguing against the usefulness of the normative standard, but part of that is establishing that it is the normative standard and that's a thing people care about.
It's not imperfect knowledge that ties us up in knots, but the fear that we'll end up depressed/anxious/unhappy and unable to figure out how to get out. A decision between two bad choices is paralyzing, not because knowledge is imperfect but because both seem bad. Whereas if someone said, "You must go live in Shanghai or Helsinki, but I can guarantee that you will live within tolerable levels of loneliness, stress, and sadness, and be mostly content," then you'll have a very different time making that decision. That decision would probably feel fun and adventurous. But the danger of depressed/anxious/unhappy is always present and out of our control - we just generally compartmentalize and doot-doot along, and big decisions sometimes force us to stop compartmentalizing.
I also think you were hiding a little bit of utility maximization in 27.last. What is understanding your values and making a decision within that framework if not trying to estimate expected... value?
40 was not to anything in particular. Just transcribing a note I made to myself while reading.
40: well, again, so if the choice presented is "this might make you depressed/anxious/unhappy but it might be fine, and you have no real way to know that ahead of time" then that is setting up exactly the scenario you propose.
(I am, as I sorta expected, falling into the role of defending an argument I don't particularly agree with (also: haven't read the book). Oh well! I just want to note for the record that I don't particularly agree with Paul's argument.)
41: Trying to assign a point-value to your fears and value-systems is completely circular. "My gut says option B, so let me go back and screw around with the weighted average so that that actually computes."
so if the choice presented is "this might make you depressed/anxious/unhappy but it might be fine, and you have no real way to know that ahead of time" then that is setting up exactly the scenario you propose.
That is always the danger of the future. That is life. 2016 might make you depressed/anxious/unhappy, whether or not you have a transformative choice in front of you. Both options will always have that fear lurking.
44: I think you are applying a more literal frame work to "computing expected value" than is necessarily called for; I don't think Paul is assuming people sit down, try to write out a value function, and then do the math. More that there is an informal process of ("My gut says option B, but let me step back: given this positive factor and this negative factor and these positive factors and this BIG negative factor, do I really think I would be happy with this choice?")
45: and you don't think that people try to mitigate that fear when they think they have some control over it?
"Normative decision model" -- I understood this to be the Normative Decision Model also known as WWND in which you ask yourself -- what would Norm from Cheers do in this situation?
Apparently I think "frame work" is two words.
To become the "we" being described in the book, we readers have to be invested or at least interested in normative decision-making standards that provide guidelines for making rational decisions, namely that the chooser should select the path with the highest expected value.
This is a "we" I could only make a half-hearted attempt to pretend to be a part of.
("My gut says option B, but let me step back: given this positive factor and this negative factor and these positive factors and this BIG negative factor, do I really think I would be happy with this choice?")
And I'm saying that "overriding the expected value of the factors" is something your gut may tell you to do. Sure, you can go back and wonder if you were mis-weighing something, but sometimes the wrong choice is the right choice.
And I don't mean wrong choice in hindsight. I mean at the time of the decision, sometimes it's right to make the stupid choice.
It's not imperfect knowledge that ties us up in knots, but the fear that we'll end up depressed/anxious/unhappy and unable to figure out how to get out.
It's not like I'm reading the book, but I do think people fear that they'll end up someone else -- "I don't want to be the kind of person who is happy in suburbia!!!" or whatever. I know upthread you said that's not a matter in your control, but surely people do sometimes worry about it with respect to the choices they make.
54: How do you know? How can you possibly ever know?
I mean at the time of the decision, sometimes it's right to make the stupid choice.
That's why we have beer.
Sure, but it's the fear that's making the decision difficult, not the imperfect knowledge. Or rather, calling everything "imperfect knowledge" is kind of silly when it's a relatively well-understood fear that is closely tied to who you are at that moment, and the only "imperfect knowledge" is whether or not it will actually come to pass.
Durian was about as gross as I expected it to be, except more in texture than smell. Apparently the one I tried didn't smell as bad as it usually smells.
I wonder if the issue here is that we're not sure what kind of a book this is. Is it a philosophy book dealing with an aspect of rational decision making theory? Or is it a book for a reader that is looking for guidance on how to make a major life-decisions?
58: I don't really understand why that's silly.
58: All you need is an unbiased estimate of the odds of whether or not it will come to pass and then the math is simple.
I thought durian was super yummy. Like mango ice cream. And it smelled as bad as advertised.
62: minus "a" or minus "s" -- your choice.
Is imperfect knowledge when you don't know what's going to happen but you have a probabilistic estimate of the chances of possible outcomes, or when you don't know your probabilities or their confidence intervals?
58: I don't really understand why that's silly.
It's reducing "tough decisions" down to "THE FUTURE IS AN ABYSS!!!" when you can say something much, much more specific about why the decision is tough. The big reasons why tough decisions are tough are rooted in our fears, which is "THE PAST!" not the future. The only contribution of the future is to determine which outcome unfolds.
67: She seems to flip back and forth, but mostly she's gearing up towards dealing with the latter.
68: well, sure, but insofar as our decisions about the future are relying on our past experiences -- which in a very direct way is, I think, the framework that Paul would like to assume -- then the problem in this case is not that you are scared of something because you have past experience with it, but that you are scared of something because you lack the experience (and values, and knowledge) to understand what it will be like based on things that you have previously experienced. So sure, it's fear, but it's fear of the unknown (unknowable) born of a lack of knowledge. Like, just to veer into a completely banned analogy, nobody is scared of death because they've been dead before and didn't like it.
Isn't there a famous philosophical story about a donkey that couldn't decide between 2 perfectly good pieces of hay and so wound up starving to death?
73: But Paul isn't interested in fear and why some particular decision is hard because it involves confronting your fraught relationship with your father. She just wants there to be a way to make a rational decision.
the problem in this case is not that you are scared of something because you have past experience with it, but that you are scared of something because you lack the experience (and values, and knowledge) to understand what it will be like based on things that you have previously experienced.
I disagree with this. It's generally fear of intolerable amounts of stress, loneliness, grief, depression, or so on. Something that you have experienced, and you can actually imagine an intensified version of it, and it sounds awful. One common major fear of parenting is that you will drown in busywork and chores. Doesn't everybody know what that might be like, whether or not they have kids?
75.last: I'd put that as "she wants to know if there is a way to make a rational decision". And agree she's not interested in fear per se.
Like, just to veer into a completely banned analogy, nobody is scared of death because they've been dead before and didn't like it.
I'm scared of death for very well-understood reasons. I'm scared of it because I'm sad to miss life and scared to abandon those who depend on me.
77: I accept your correction! Not having read any of the book has allowed you to understand it best.
76: well yes, okay, the bad outcome -- you're unhappy all the time -- is imaginable. So one way to think about it (in the parenting case) is on the one hand you can imagine being sleep deprived, overwhelmed, stressed, depressed, trapped, etcetera. But you can't imagine the ineffable joy of having children, insofar as that's a plausible real outcome. So is it then true that the only rational decision to be made w/r/t having children is not to have them?
79: yeah it's a little absurd, right? I did read the original paper and I have seen her give a talk on this work and had a chance to ask her some questions about it, so I'm not operating from, like, complete ignorance. But I'm probably still being ridiculous.
78: Yes, but people that hate their lives, and are all alone, have also been known to fear death.
But you can't imagine the ineffable joy of having children, insofar as that's a plausible real outcome. So is it then true that the only rational decision to be made w/r/t having children is not to have them?
Or maybe rational decision theory is dumb?
24, 25: has she mentioned the "Mary the vision scientist" argument? That is essentially the classic argument for qualia, I believe.
Yes.
That's kind of what I got out of economics. Or at least that rational decision theory was so dependent on the assumptions you made that it was pointless.
So is it then true that the only rational decision to be made w/r/t having children is not to have them?
SPOILER ALERT!!!
Later on, Paul explicitly considers this possibility and rejects it.
80.last: in fact Paul comes pretty close to saying exactly that.
I was initially going to append this to Thorn's thing, but it got long (over 5,000 words long), so instead I made a post elsewhere.
82: That's probably at least partially the reason vampire fiction is so popular.
83: that's certainly possible. But I kind of buy Paul's premise that some large number of people at least think that they're making rational decisions about e.g. whether (or when, or under what circumstances) to have a child.
Not to speak for the author of a book I have not actually read, but I suspect that Paul wouldn't disagree with any of 27. Her premise as far as I gather it seems to be that people vastly overrate "rational decision-making" (especially rational decision-making by simulating future phenomenal states) as an approach to making life decisions.
But everything she's said about "transformative experience" really is couched in terms of (a) making a choice to (b) have an experience with the following properties. So if that's going to motivate an argument that people should revise how they think of rational decision-making, one wants to know if that ever actually does happen.
Paul's premise that some large number of people at least think that they're making rational decisions about e.g. whether (or when, or under what circumstances) to have a child.
Have we corrected for who had an econ 101 class?
But you can't imagine the ineffable joy of having children,
You can imagine that you might experience ineffable joy, and you are presumably aware that that's a positive outcome, though; you really need the idea that you have to be acquainted with the subjective state to be able to decide to get the ball rolling, afaict.
91: sure, lots of people think they're making rational choices. But once you start to tease apart their thinking, they're really not. (This is basically my main takeaway from the book: rational choice is an illusion and you'll just tie yourself in knots if you think about it too much. For some reason people seem uncomfortable with the idea that major life decisions are made irrationally, though.)
(This is basically my main takeaway from the book: rational choice is an illusion and you'll just tie yourself in knots if you think about it too much. For some reason people seem uncomfortable with the idea that major life decisions are made irrationally, though.)
BUT: "Now, in fact, I think it does make sense to ask how rational agents should make transformative decisions, because I think agents can meet the relevant normative standard. So, in the end, I will argue that normative decision theory does apply. But there is a catch: in order for standard decision theory to apply, we will have to reject or significantly modify a deeply ingrained, very natural approach to making such decisions, the approach that takes subjective values of one's future lived experience into account." (33)
I admit you're farther along than I am.
95, 96: That was Josh's takeaway -- I don't think he was meaning to suggest that was Paul's actual argument.
94: yes, I think that's right. If you don't buy the qualia of decision-making premise then you aren't really going to go along.
Honestly, I feel like the author is basing this on hyper-analytical ruminating types of academics who endlessly wring their hands and couch everything intellectually to avoid admitting their deep fears.
When the vast majority of people have a million different, other creative ways to avoid admitting their fears!
I can't figure out Paul's connection to that link. That link seemed very sensible to me. And it basically follows the format of 27 - vet the pros and cons, ask about values and fears, and then finally, go with your gut.
39: If what's going on is that Paul's arguing against the normative model of rational decision making (am I garbling what to call this? I don't have the book with me at work), then it seems to me that she's making a bit of a strawman of it. (Not that I, myself, think that anything terribly close to that model is how people do or should make decisions. But it seems as if something like it is more workable than she argues.)
She's taking it as a not-particularly thoroughly argued premise that qualia/direct experience is a sine qua non of rational decisionmaking -- that other types of information just don't count toward supporting a rational decision. (A) I just don't get the argument for that at all, and (B) by introspection, if I understand what she's talking about psychologically, I find that decisionmaking based on what internally feels like the simulation of future states of mind based on past experience is not just not the only way to make a rational decision, it's kind of a terrible one in terms of achieving my actual greatest expected value.
I talk about skiing here a fair amount. I'm a coward who doesn't like discomfort, exhaustion, or being cold. When I think about skiing, the bits of it I don't like are more viscerally vivid than the bits I like -- I can really feel being terrified with sore feet and shivering from a combination of cold and being completely worn out. Pretty much every time I plan a ski trip, I have a period, right up to when I'm on the lifts, when I'm dreading it, and I only actually go because I'm committed and I'd be wasting the money I spent if I didn't.
In practice, though, I really really do enjoy it; it's just that there's something about the miserable parts of it that sticks with me viscerally more vividly than how much fun it is (probably because I'm fundamentally a miserable bastard, internally). So to make a decision that will actually get me to do something that leaves me grinning like an idiot, I have to consciously discount the internal simulation I'm running of how much it's going to suck, and decide much more on remembered facts (e.g., that every time a ski trip ends I'm planning how I can rearrange my life to ski much more) than remembered feelings.
Oh, wait. 104 crossed with 96, which seems to go right to what's bothering me. Is what's going on that the whole book is a rejection of the qualia of decision-making premise, and I'm stuck on why you'd take that as a premise to begin with?
Is what's going on that the whole book is a rejection of the qualia of decision-making premise, and I'm stuck on why you'd take that as a premise to begin with?
I'll just quote myself:
Let us set aside, for now, my impression that the said approach is neither deeply ingrained nor very natural. Here is my question. Why not say this straight out? Why coyly say, way back at p. 14 (emphasis added), "Subjective values play an important role ... if, when we evaluate our alternatives, we choose between them based on the expected subjective value of an act", for instance? When I got there, I wrote "big if!" in the margin of the book, thinking that perhaps it was being implicitly endorsed. It is, apparently, to be rejected. I understand attempting to establish that there is a problem before delivering the solution. Sometimes the way out is through! But this felt as if the author was in possession of a secret all along, or something. (Partly, also, since I never thought the supposedly ingrained tendency was attractive, I felt somewhat as if it had just been revealed that there had actually been no reason for me to read the preceding 32 pages, or at least no reason to have gotten so het up in the process.)
Yeah, I'm reading your thing now, and we're very much on the same page, except that you actually have some background to talk about this stuff from.
104: well, you have foiled my attempt to defend Paul's argument because I essentially agree with you (and I think the skiing argument is a good one). I will say that there is a related active debate in the cognitive sciences about whether simulation or theory-theory is the better way to understand actual Theory of Mind processes in the brain, so it would probably be fair-ish to assume that Paul is starting from a relatively strong simulation conception of Theory of Own Future Mind (totally just made that up; I'm sure there's a more technical way to put it) which, however thoroughly she argues it, is certainly an idea with support elsewhere. (more on simulation vs. theory-theory)
I do think one can make a stronger case against advice from one's idiot friends than we are allowing here.
Like, in the vampire example: your friends have made an irrevocable decision to turn to a life of cannibalism. Regardless of your assumptions about the similarity of your values before, does it not make sense to -- given your current, non-cannibalistic value system -- distrust people who have vocally and evidently adopted a value system where limited, restricted cannibalism is perfectly fine? Especially when the alternative for them would be to admit regret at making an irrevocable decision with enough upside that one could plausibly tell oneself it had been a good idea if one could get past the cannibalism thing?
104 is interesting mostly because it's exactly the opposite reaction I have to skiing. If you ask me, I would tell you I love it and would like to do it more often. Actually doing it, my overwhelming impressions through the course of the day are usually (1) it's fucking freezing, (2) these boots are uncomfortable, (3) I'm unpleasantly worried I'm going to hurt myself and/or ouch, damn, I've hurt myself. But then I get home and all of that fades and I just remember the fun parts, and I begin to want to go again.
Yeah, the skiing thing is a good example.
Why not say this straight out? Why coyly say... I felt somewhat as if it had just been revealed that there had actually been no reason for me to read the preceding 32 pages
I feel like this is a common pitfall for people who are trying to be fairly engaging writers while writing something academic -- it's tempting to set your points up as engaging reveals! and reversals! because the storytelling impulse makes that feel like a good idea, but for the actual purpose at hand, it really isn't.
110: I had assumed that we were supposed to just take it as given that they were reporting their genuine subjective state, and that it was the subjective state that you would genuinely enjoy (in the sense of have, I mean) if you made the same decision.
The idea that there's something terrible about that decision and you shouldn't make it even though after having made it you won't find it to have been terrible is one that makes a lot of sense to me, but if the important thing is supposed to be "how will it be for me subjectively?" then it seems like it just doesn't apply.
vocally and evidently adopted a value system where limited, restricted cannibalism is perfectly fine
I'll have you know that some of my best friends are Christian.
I myself have this problem a lot, ahem. Also I have the other problem where there are approximately eight million things I feel like I need to say up front, but then you've said so much up front that everyone has completely lost track of what it's supposed to be up front of.
I am not reading the book, but am interested to follow the discussion and, so far, I'm feeling like the questions* that I asked earlier were good ones, and that it may take a while to get a satisfying answer.
Also, regarding the Vampire example, I'm curious if she addresses the possibility that part of the nature of being a Vampire is that it forces one to lie to other people about the quality of the experience (there's an oglaf cartoon which is relevant which I will link to when I'm not at work). That's a trivial point, because the question doesn't really apply to any of the non-vampire examples, but it's one that I keep thinking about.
* comments 49 and 52
110, 117: The "maybe everyone who says they like it is just lying" problem is a real one in real life -- I think about that in terms of law firms, where people who are successful in law firms mostly purport to be happy, but I suspect them of putting a good face on having kind of wrecked their lives in the pursuit of law-firm success.
118: see also startups. OTOH as helpy-chalk has pointed out, people are really good at rationalizing pain (his example was a wrongfully-convicted man who claimed that he wasn't bitter, and that his experience in prison had taught him a lot).
Law school is really a generalist type of education. Whether you use the skills you acquire for working in a law firm or living in prison is up to you.
People are really good at rationalizing, full-stop. One of the leading theories of self-knowledge is, essentially, the theory-theory idea: we observe our behavior and then come up with a rational explanation for why we must have done it.
people who are successful in law firms mostly purport to be happy
Um, maybe to prospective associates they are trying to recruit, or to colleagues if they think expressing discontent might impact their careers, or to third parties in casual conversation where admitting misery would be inappropriate, but, in a more general sense, no, they mostly don't purport to be happy.
121: sounds right to me. Again, it seems to make people really uncomfortable.
Huh. Seems like I ran into a lot of people who claimed to love what they do, in ways that I found unconvincing. Possibly of course they were all lying to me, specifically.
Apparently, I'm only happy when I'm uncomfortable.
111: This seems as if we could work out a solution where you impulsively buy lift tickets and then send them to me so you don't have to use them.
we observe our behavior and then come up with a rational explanation for why we must have done it.
Yes, this. I mean, we're capable of behaving otherwise, but this sure captures a lot.
Isn't there a famous philosophical story about a donkey . . .
Buridan's Ass, which we only associate with Buridan because Spinoza misremembered something.
127: Yes, sometimes we observe our behavior, and acknowledge that it makes no sense at all.
I find no appeal at all in theory-theory as an account of self knowledge.
130: maybe you're just not very good at rationalizing.
I believe 121 totally, which probably means that there's something wrong with it, and some cognitive bias makes me find it attractive.
121 is OK as far as it goes, but seems to presuppose a desire for explanation or for self knowledge.
Much of human behaviour is an active aversion to self knowledge-- at least some of the time, simple denial of past action is a viable alternative to explaining to oneself why a loved one was betrayed, or why greed or laziness dominated yet another day.
It's not at all clear to me that reason is especially helpful for thinking about human behavior-- there's a set of reasonable caricatures that fit some behavior some of the time, with 121 one of the more sophisticated ones. But how to choose which cartoon to apply in which situation?
Chanced upon this, and it seemed relevant.
http://www.nietzschefamilycircus.com/perm.php?c=99&q=35
Experience is what keeps a man who makes the same mistake twice from admitting it the third time.
Alternately, on acuity of perception,
If you see one redwood, you've seen them all
-Ronald Reagan.
Seeking a rational basis for approaching introspective, considered behavior is kind of an interesting exercise. But how typical is introspective considered behavior? Addiction, confrontation, habit-- I don't see how to get similar answers from any of these starting points.
post transformative experience ergo propter rational decision-making
It is my belief no man ever understands quite his own artful dodges to escape from the grim shadow of self-knowledge.
I mostly agree with all the criticisms in this thread. So, on to the important question: can someone who has read further tell me if it gets better? Because otherwise I am probably disinclined to keep reading.
How'd you get through law school with that attitude?
138: I'm into the afterword now, and yes I can tell you if it gets better.
Can you tell him that it gets better?
Be careful, having already read the book, Josh will probably rationalize the benefits of having made that mildly transformative decision.
I'm interested, to those who are reading the book, if Paul ever rigorously defines a transformative decision in terms of the specifically negative possible consequences -- that certainly seems important to the conception, and I'm not sure I've grasped that piece of it specifically.
Broadly stated, the task is to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist.
I gotta say, it *does* strike me as a little weird that the afterword is almost half as long as the body of the book, and honestly seems in a lot of ways more substantive. Is this normal for philosophy books these days?
Man, Herb Simon. Awesome. What is up. Good to see you here.
But maybe that wasn't a sincere question.
On the whole, I'd rather be in Pittsburgh.
Surely someone has written something featuring a blog that becomes the medium through which the dead (not necessarily just social scientists, but lots of other people too) express themselves.
74, 128: The wannabe VC (in real life, midlevel sales executive) who lent some money to my startup and then made life difficult by trying to manage it told a version of Buridan's ass in which SCIENCE had PROVED that if you put two equidistant pellets in front of a rat, it will short-circuit and starve. I think he sincerely believed it. He then parlayed the story into management advice for the startup, but whatever the advice was we didn't take it.
I guess that could be part of the premise.
A premise and its contrapositive are logically a corpse.
156: rats, ironically, are smarter than that man.
wannabe VC (in real life, midlevel sales executive)
Is this, like, a thing? It doesn't seem like those types make enough money to even pretend to be VCs.
It's a thing out here; there are always people willing to believe that a brilliant idea plus a five-figure investment is enough to bake you an Uber or whatever. This guy pulled together the investment by pooling his own money with that of his richer friends. Unfortunately he didn't have anyone better than us to spend it on.
And at that point in my life I was only pretending to be a developer, so shame on everyone, but at least I got a career change out of it.
Separating a wannabe VC from his money is nothing to be ashamed of.
At that point in your life, did you understand what it would be like to be a developer?
No, but I had no reason to doubt those who told me the blood was delicious, and as a humanities grad student I'd certainly had my fill of ichor.
I never noticed any divinity coursing through my veins as a grad student.
Indeed no. I guess "hemolymph" is the correct term for what I had in mind.
Enough things are similar to durian that you can guess what it's like. I have tasted durian. It wasn't transformative for me. It was too heating. Also you don't eat it for breakfast.
What does it say about me that I find the vampire examples more compelling than the ones about seeing red and eating actual fruit!
156/162/163: The thing that stops a lot of midlevel sales executives from being wannabe VC is the legal requirement that investors in private companies be "accredited"--meaning either $200k+ in annual income or $1M+ in net worth. So, this guy must have been pretty good at his sales job, I guess.
If this was in the SFBA, being reasonably competent at a senior-ish sales job and/or owning a house would be sufficient to meet those conditions.
I do think deciding whether I'd like to become a vampire would be a very difficult decision. But I think that's mostly because there are a lot of unknowns to the decision which, if vampires were actually real, I think would be mostly a lot better known. In which case I think I could evaluate the decision much more rationally.
I can't see why on earth I'd want to be a vampire. Unless the numbers of vampires were such that my current life was going to become miserable, I'd probably just stick with things as they are.
Come on in, heebie. The water's fine.
I felt sure I'd see "Come play with us...forever and ever and ever" when I saw my vampire friends pop up on the side bar.
Ugh, you're so basic.
Hmm I swear that was a question mark when I typed it.
I know, deprecated: https://xkcd.com/1170/
It feels like both the "informed gut" and the "rational probability aggregate" decision techniques are problematic.
The vampire example is powerful because transformation is a type of trauma, and trauma is like death: you are not the same person on the other side of it. Not having to define this exhaustively is OK by me, but it sits weirdly with the plain-English, slightly pedantic analytical style of the book. It's some kind of American philosophy house style that reminds me of Rawls, good and bad.
With Greece and climate change in the news, the weighted probability model reminds me more of transformative political experience than individual decisions. Both seem fair examples of epistemically impacting decisions as well. Should you transform into a vampire squid?
I think it's fair to claim the feeling of raising a child is beyond some boundary of pre-child knowledge. Perhaps some people experience it as extending the spectrum of the biographical experience, though.
Is the whole thing a potential application area for linked brains?
Over the weekend I watched The Fly and Transcendence, both of which involve people trying to encourage others to adopt their transformative experience. I found the uploaded human/AI's pitch more compelling.
I've eaten durian. While I may not know what down is, I do know what the taste of sweat-socks-used-by-a-14-year-old-for-a-week-and-left-in-a-school-locker-for-two is.
Evolutionarily, bad-tasting items taste bad because they will surely kill you. So you avoid them and live. As you are introduced to new foods and a bad-tasting item doesn't kill you, your brain adapts, and what once tasted bad often becomes good, such that you can't even imagine that you ever thought it tasted bad.
Can a vampire even conceive of what it is like to be human any more than a human can conceive of what it is like to be a vampire?
Hey, my book just came recently, and now I have done the reading, so I can post to this thread.
My first thought is that I'm surprised the book doesn't seem to talk about Jonathan Glover's (1984) What Sort of People Should There Be, which deals with the same kind of issues, but at a species-wide level. For instance, suppose we develop a technology that lets us mush all our brains into one brain, as in the monkeytorture thread I didn't read. As it happens, once we have all our brains mushed together, we will have a value system that tells us this is the best way to be. Right now, however, we are grossed out by having our personal borders violated. How do we evaluate the change?
Ok, off to swim run. I'll catch up on the thread next.
183
I do know what the taste of sweat-socks-used-by-a-14-year-old-for-a-week-and-left-in-a-school-locker-for-two is.
Go on.
NickS in 117: I'm curious if she addresses the possibility that part of the nature of being a Vampire is that it forces one to lie to other people about the quality of the experience (there's an oglaf cartoon which is relevant which I will link to when I'm not at work).
Yes, starting on page 45: "Maybe they harbor secret regrets. Maybe something about being a vampire warps their views."
This is an important premise for her, because it helps her play down the reliability of forms of decision making that don't involve mentally simulating what the future will be like.
This is an important premise for her, because it helps her play down the reliability of forms of decision making that don't involve mentally simulating what the future will be like.
I don't see how that possibly works.
LB: She's taking it as a not-particularly thoroughly argued premise that qualia/direct experience is a sine qua non of rational decisionmaking -- that other types of information just don't count toward supporting a rational decision. (A) I just don't get the argument for that at all,
Oddly, she thinks that imagining how your future self will feel is the obvious default mode for people. So on 25 she says "It is worth noting that this approach dovetails with a predominant cultural paradigm of how to approach decisions about our own lives," and on 33 she says that saving normative decision theory means "we will have to reject or significantly modify a deeply ingrained very natural approach to making such decisions."
Although she provides no evidence that this is the natural way to do things, I see where she is coming from. People make decisions using their feels, and this is an example of it. Still, I don't know why we should be surprised that sticking with normative rational decision theory will require us to reject a normal, intuition based way to do things.
187: She's in kind of an odd position, where she has to motivate the belief that a certain method of decision making is important, then present examples where that method isn't possible, and then (I think) say the method wasn't that important to rationality to begin with.
So sometimes she needs to undercut objections like the one LB makes in 12:
"You can gather information about what your sparkly preferences are likely to be, by asking people like you before becoming sparkly what their preferences are now that they have become sparkly. At that point, you can do the math on (1) what are the odds of my future preferences, (2) what are the odds of various outcomes so (3) what's the expected value of all the outcomes considering my future preferences."
I think Paul wants to reply here by saying "Sure, you can do this using decision theory, but look at how uncertain it is---vampires could be lying! I know you really want to use cognitive simulation now, don't you? But you can't!"
Well, there's a big difference between "secret regrets" and "warped views" because in the second, I mean, what's warped? Does that mean they wouldn't like it but for being vampires? But they are vampires, and so would you be, and you'd be warped like them, so who cares? And I think "they could all be secretly regretful, and putting on a brave face!" is a mighty thin reed, and it doesn't seem, in context, like one she's putting a lot of weight on.
But they are vampires, and so would you be, and you'd be warped like them, so who cares?
Yeah, I think the big problem here is she has used the debased language of economics, where values become preferences, so it is really hard for her to say things like "I don't want to be the kind of person who would have their preferences satisfied this way."
Leon Kass spends some time in his fulminations against reproductive cloning worrying that a future will come when everyone thinks cloning is perfectly normal and not repugnant at all. So everyone will be immoral and not know it! Kass is an idiot, of course, but I think this kind of worry is reasonable, at least for things other than cloning, and that is what Paul is trying to represent.
(See also the worry that Glover has which I mentioned in 184: what if at some point we face the decision of whether to mush all our brains together. We might value that state after it has happened, but be disgusted by it now. Which value system do we use to make the decision?)
Yeah, I think the big problem here is she has used the debased language of economics, where values become preferences, so it is really hard for her to say things like "I don't want to be the kind of person who would have their preferences satisfied this way."
But it's easy for me to say that!
that is what Paul is trying to represent.
I'm not convinced that that's the case.
192: Yeah, I saw that you addressed these issues. I like the Nehamas example. And you are very right that the book is weirdly organized.
Hey, I just sent neb a big long thing about Chapter 2! Sure do hope it doesn't make me look stupid.
Or chapter 3, I suppose. Neb, edit!
I just sent neb a big long thing
Now he will have two.