A guest post by Matt Yglesias, everyone. Thanks, Matt.
Allow me to add that John Emerson should stay well away from this essay and indeed practically all the literature on this so-called problem.
1: Ah, thank you. One of the problems with occasionally reading something by a philosophy type is that when that read bit is incoherent, it's unclear whether that's a function of the poor writing or ignorant reading.
there is nothing that it its to be
A phrase of art?
A mistake. "There is nothing that it is to be".
I make plentiful typos in all transcriptions; I don't see why ogged singles this one out for mockery. Except that he's a meanieface.
Since this is already about philosophy of possible worlds and mentions Yglesias in the first comment, can we talk about the post title, "And If Obama Were A Giraffe, He'd Have a Really Short Neck." I was trying to start an argument about whether Obama would have a short neck in the possible world where he's a giraffe, but no one else seemed interested.
Well, come on, that thing with the cup requires two girls, not one, so the thing with the cup that Serena and Charlie are up to is clearly different. And anyway, SG clearly just lacks imagination; there are plenty of arousing things to be done with a cup.
Gwyneth Paltrow's hip to the cupping action.
Re 7:
My intuitions agree with washerdreyer's. Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.
I was trying to start an argument about whether Obama would have a short neck in the possible world where he's a giraffe, but no one else seemed interested.
You appear to be thinking of the possible world in which Mr. Obama is an okapi, no?
Osama
Oh yeah, I almost forgot. We're screwed.
Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.
No, see, Obama's long-necked for a human, but if he were a giraffe with the same length of neck he would be short-necked. We're talking absolute neck length, not relative.
As always, we face a problem of definitions. Do you mean "a possible world in which Obama is, rather than homo sapiens, a normal specimen of giraffa camelopardalis", or do you mean "a possible world in which the homo sapiens named B. H. Obama is classified as a giraffe?" In the first, he would have a long neck, in the second, a short one.
ah, but teo, if he were a giraffe, he'd be a giraffe. It's certainly true that Obama right now has a short neck for a giraffe, but that doesn't tell us if giraffe-obama would also have a short neck for a giraffe.
For instance, if I were to say, "I wish I was a dolphin" I wouldn't be expressing a wish that nothing about me change except that I suddenly become classified as a member of another species.
16: See 15. Yglesias clearly had the second scenario in mind.
crap, "I wish I were a dolphin." I also wouldn't be trying to become a member of a shitty football team.
Google is not doing much to enlighten me on this cupping shit, but it sounds painful. Not that there's anything wrong with that.
this cupping shit
You're pretty close right there, actually.
...or maybe some strange human-giraffe hybrid.
With frickin' lasers on its head.
This is reminding me of the Lincoln joke: "How many legs does a donkey have, if we call its tail a leg?"
"Four. Calling the tail a leg doesn't make it one."
20, Wikipedia is your friend.
Is anybody going to explain what the hell Gendler is talking about?
Obama appears relatively long-necked for a human. In the closest possible world in which Osama is a giraffe, I expect him to be equally as long-necked, relative to the local population of giraffes.
I like to imagine 10 as an endorsement of Obama by an open-minded giraffe.
27: I think he's saying that there is no "thing with the cup" that makes sense in that passage from Wolfe. w-lfs-n refutes him by pulling tea for two out of his ass.
One sees the giraffe in a debate with Obama, looking down disdainfully: "You're longnecked enough."
Gendler's female. And she's arguing that while we know from context that 'the-thing-with-the-cup' must be naughty, it's not because of any one-to-one correspondence with a naughty thing in the real world.
If Obama Were a Giraffe, He'd Be a Venomous Cult Leader Giraffe
I think Gendler was the TA for a course that I endured once.
32: But: "There are no extra body parts, no extra positions, no extra ways in which something that is not arousing in this world is arousing in that world." That is, there isn't a MAF naughty thing that corresponds to a real world not naughty thing, no? So either it floats, unconnected, in MAF world, or it doesn't exist. I would think.
In the closest possible world in which Obama is a giraffe, all of us are giraffes. Even those of us who are ducks.
I'm going to hit someone with a copy of Naming and Necessity, any second now. And perhaps I'll strap a copy of Lewis' Counterfactuals to it for some extra heft.
35: Well, that's why it's a puzzle. You're right that it shouldn't be arousing, but it's hard to read that passage and think 'they thought they were aroused by the thing with the cup, but weren't.'
To 35: Yeah, I agree: I also think she is implying that there is either (a) no activity the author had in mind (like the "Venus Butterfly" from LA Law) or, more strongly, (b) no activity that could be coherently imagined as actual that "the thing with the cup" corresponds to.
To 37: Lewis would allow for Obama-giraffes, wouldn't he?
I understand and agree with 39.1. This from 35, though:
So either it floats, unconnected, in MAF world, or it doesn't exist. I would think.
I'm not sure what it means. "The thing with the cup" has to exist in MAF world -- the presumably truthful narration reports it as existing.
40: I think, if I read 39 properly, we're saying the same thing, though 39 says it so much more clearly that it might be a different thing. I think the "more strongly" part is that "thing with the cups" can't be imagined even by the rules of MAF world.
In the possible world in which ttaM is Samuel Johnson, Refutation by Projectile is a valid argument form.
42: Actually, reading 39 again, I think I'm wrong about "more strongly." But that's what I meant.
can't be imagined even by the rules of MAF world.
How can that be? The characters aren't just imagining it, they're doing, and enjoying, it. We can't imagine it, sure, but the rules of MAF world have to be such that there is some 'thing with the cup' that's an arousing activity; just not one that can be comprehended in our world.
The point of the Yglesian Counterfactual (as I propose we call "If Obama were a giraffe..." from now on) is, of course, that he's making an analogy with Michael O'Hanlon saying something like "if Obama thinks direct meetings with leaders of enemy countries will solve all our foreign policy problems, he's totally and dangerously wrong." Since Obama doesn't in fact think that, speculating about what it would say about him if he did is pointless except to smear him. Similarly, since Obama is not in fact a giraffe, talking about how his neck (that is, the actual neck that Obama the human being has) would be short if he were one is pointless and absurd, and could only serve the purpose of discrediting him with the all-important giraffe demographic.
45: That's exactly it. If I remember the rest of the essay correctly, the presumption is that MAF-world is like ours in lots (maybe even all) of relevant ways, except for this indescribable thing-with-a-cup. That should strike us as a little weird.
I tend to find that all the people I like understand the following two sentences immediately:
If I was a cat I'd never lick my arse because that's disgusting. But if I was a cat, I'd lick my arse anyway because I wouldn't be me, I'd be a cat.
and all the people I don't like get confused by it. It's like that funeral puzzle test for psychopaths.
45: Well, it could be a mistake, like saying that Superman both can and can't look through walls made of lead. But the weaker argument is probably better.
What does "MAF" stand for?
Was it obvious to everyone else that Wolfe emphasized the ambiguity of "the thing with the cup" to a comical extent, thus making it obvious that it's not really ambiguous because the author is making it perfectly clear that for no good reason he won't tell you what "the thing" is?
Is this really what philosophy is reduced to?
wait...have 5 different people so far used "MAF" as an abbreviation for "A Man In Full", without noticing? Or does MAF mean something else?
No, Ned, you're just smarter than everyone else.
It's like that funeral puzzle test for psychopaths.
?
Is that sarcastic? Does it mean something else?
I knew I shouldn't have entered the actual-philosophy thread.
a MAn in Full. Why anyone would abbreviate Man In Full that way I couldn't tell you.
46: I don't deny that the author's intention is to create a parallel like the one you propose. I deny that the natural interpretation of the Yglesian Counterfactual does create such a parallel.
I deny that the natural interpretation of the Yglesian Counterfactual does create such a parallel.
I don't disagree with that. It's certainly counterintuitive.
54: No, you got it right. (Actually, I'd noticed it, but figured that since everyone seemed to be using the abbreviation fluently, straightening it out was more trouble than it was worth.)
Look, the question isn't just that Gendler didn't understand exaggeration for comic effect. The problem is that the traditional analyses of passages like that can't explain our (completely expected) interpretation of the passage because those traditional analyses would predict the wrong response to a passage like that.
The whole paper is actually pretty interesting (even though I disagree with her conclusions), and it sits at this bizarre intersection between psychology and philosophy.
So, yeah, that's what it's reduced to.
54: Basically, I fucked up the abbreviation and everyone else just let it go.
straightening it out was more trouble than it was worth
New mouseover.
46: I agree. I, at least, was just goofing off about what philosophy is ultimately reduced to. (Nevertheless, I do believe, for the reasons stated, the following: if Obama were a giraffe in a Disney animation featuring the presidential candidates represented as non-human animals, he would have a long neck.)
47, 58: That seems exactly right, and what she's ultimately interested in is which anomalies we let slide and which we "resist" when interacting with fictions:
(1) People are just like us in a world just like ours but they are mysteriously aroused by "the thing with the cup."
(2) People are just like us in a world just like ours but they are emotionally indifferent to [pick generic moral horror].
As readers, we're ok with (1), not with (2).
I can think of all kinds of arousing things they could be doing with a cup. I agree with Gendler that the *point* is that no one can know---there is no obvious wink-nudge here to an act we all know about---but it's a common enough object that there are all kinds of things that could be imagined in the place of cup-sex.
I just figured it was some kind of reference to the first scene in The Story of the Eye, but I guess that was a bowl, not a cup.
she's ultimately interested in is which anomalies we let slide and which we "resist" when interacting with fictions:
(1) People are just like us in a world just like ours but they are mysteriously aroused by "the thing with the cup."
(2) People are just like us in a world just like ours but they are emotionally indifferent to [pick generic moral horror].
Situation (2) is a lot more likely and easy to understand than situation (1), since even right here in our world people are routinely indifferent to moral horrors. Being aroused by a cup is much rarer. The only reason people didn't "resist" all the ridiculous passages in A Man In Full -- indeed, the only reason the damn thing was published -- is it was written by the ancient literary lion Tom Wolfe.
A big reason so much philosophy is bullshit is that the "evidence" is the intuition of the author, who then tries to browbeat the reader into accepting it as humanly universal.
whoops! tags!
Come on Emerson, let's see your view on all this.
Being aroused by a cup is much rarer.
Sure, but every sexual relationship has a few weird kinky things in it that are described as "that thing with the x." I figured it was a locution not meant to drive the reader mad with wonder (OMG Everyone knows what the thing with the cup is except me!) but to evoke a private erotic language that makes sense to them and is inaccessible to everyone else, reader included.
The problem is that the traditional analyses of passages like that can't explain our (completely expected) interpretation of the passage because those traditional analyses would predict the wrong response to a passage like that.
(I haven't read the paper -- possible worlds was never my thing. Bear with me while I walk through this.)
What's the traditional analysis? Simply that in MAF-world, the thing with the cup is naughty/arousing? I'm not seeing why, in this world, we'd have trouble imagining some such thing with a cup. In particular, Gendler's "there are no extra positions" seems unwarranted. It's some thing with a cup, after all, calling for unknown but conceivable positions, one would think.
So if Gendler's thing-with-the-cup problem is actually as Cala explains above, it's not (as I want to think) that the MAF scenario is a bad example, but that the traditional analysis -- that we can easily imagine some scenario -- doesn't account for comedic, or otherwise unusual, literary devices and effects?
In which case, I want to say: so what? Is it supposed to? I seem to be missing something here.
I defy you to come up with a sex act that fits that passage, is arousing and anatomically possible, and is not impossibly perverted (a la two girls, one cup).
We do have an active thread dedicated to smut.
69: What kind of a response is that? Jesus. I think you're lying.
I defy you to post descriptions of several on this thread!
Not things that would be universally arousing to all people in all situations, but it's very easy to imagine a little cup taking on a sort of fetishistic value for a particular couple. Household objects can be employed in a variety of erotic ways that do not necessarily involve, like, insertion or whatever. Even just the fact that this particular cup was once used in erotic play to, like, playfully roll across someone's hips or something, can make it take on this fetish value, not out of the inherent sexiness of cups, but because the memory of the surprising arousal of the first interaction with the cup is always present for that couple.
It's kind of like if there's a particular album playing while you happen to be having the hottest sex of your life. If you named the album to someone else, they might say, "But that's not sexy!" and you'd reply, "But it is to me, because I can't hear it without remembering the hottest sex of my life."
72: Yeah, but that sort of thing doesn't really fit the passage as written, does it? The implication of the passage is that the cup is instrumentally necessary to perform some act that is both degrading/degenerate and very pleasurable. The sort of memento/fetish value you're talking about wouldn't be described in the same terms, I don't think.
All that "The problem with that thing in the cup is that there is nothing that it its to be that thing with the cup in this (the actual) world, " demonstrates to me is that Gendler hasn't bothered to do any research at all into sexual kinks and fetishes. There is at least one thing with pretty much any noun you care to name, usually more. The fact that someone who finds sexual gratification in relatively conventional ways may not have heard of them, or been able to imagine them, is no proof of anything when it comes to whether they actually exist or not.
A simple Google search for fetish with cup reveals several things with cups right in the first few links. Gendler is arrogantly overconfident and ignorant. Wolfe often is too but this time it's not his fault. :)
OT: can someone quickly give me a good German synonym for lurid.
73: But it could still be something fairly sexual, of course. What if it's just that he once grabbed a little porcelain cup to stimulate her (the sides being curved and cool, etc.) and she happened to be fantastically aroused by it, and now she begs him to do "the thing with the cup" because the cup has taken on this really personal erotic value? This is difficult to imagine? It's a personal universe of desire, not a sentimental thing.
There are no possible worlds in the Szabó Gendler paper, people, just fictional ones.
I think a lot of the literature on this is badly done because people use such ridiculous toy examples—Wal/ton and Wea/therson especially—and have such an impoverished understanding of what imagination is and what imaginative response to literature is that they're not really talking about anything at all. This is especially ironic, since the Mor/an paper that SG cites is much better on this sort of thing, with an appreciation for various sorts of imaginative participation and the importance of style and whatnot.
I wish I could lick myself like a cat.
That's the most reasonable google-proofing in ages.
Okay, there is "Danger! Imminent exposure!" and all that, but imagine something only a little ramped up from "genital stimulation with a cup" and there is guaranteed to be a dude driven wild at the insane kinkiness of it. There is very little anyone could ever do in bed that genuinely merits that kind of anxious fear response, but the lesson here is that people are weird about sex.
A big reason so much philosophy is bullshit is that the "evidence" is the intuition of the author, who then tries to browbeat the reader into accepting it as humanly universal.
I don't know much about philosophy, but this is definitely a problem in linguistics. Things are improving, though.
and is not impossibly perverted (a la two girls, one cup)
Why this criterion? "Impossibly perverted" sounds like more or less what Wolfe means.
There are no possible worlds in the Szabó Gendler paper, people, just fictional ones.
Yeah, I almost reworded that.
Which Mo/ran paper that SG cites?
76 -- you can't mechanically stimulate anybody with a porcelain cup. The rounded sides make it too difficult, and if you tried vigorously enough to get somewhere you'd be taking the risk of breaking it and embedding shards of porcelain in somebody's inner thigh. The only thing a cup could possibly do is hold some bodily excretion, which doesn't really fit the passage.
This is starting to bug me. What a lousy piece of writing that is. Out of the whole vast universe of feasible things to get off with, Wolfe has her pull a *cup* out of her purse.
you can't mechanically stimulate anybody with a porcelain cup.
Where'd the empiricists who tested out bathroom stall oral sex positions go?
embedding shards of porcelain in somebody's inner thigh
hott.
Parsley: "The Expression of Feeling in Im/agination". Very good!
81: he's a wealthy businessman and she's a sophisticated seductress; it would totally twist up the character to have him doing scat with her the first time they had sex.
Maybe piss or something. I think I have to stop now.
Where'd the empiricists who tested out bathroom stall oral sex positions go?
The sociology department.
83: It is possible to be gentle, you know. I feel weird explaining this. The thing that makes something like a porcelain cup stimulating is that you couldn't use it vigorously at all, which constraint makes the act potentially extraordinarily hot.
But yes, it's a lousy piece of writing and Tom Wolfe should be ashamed of himself.
86: Thanks, Ben.
83: you can't mechanically stimulate anybody with a porcelain cup
PGD is trolling.
Yeah, I'm going to have to defer to AWB on this -- I don't think I'd been sufficiently accounting for individual weirdnesses. (That is, messing around with a cup? Eh, certainly possible. Perceiving it as being as fraught with meaning as the passage does? Seems peculiar, but I suppose not impossibly so.)
PGD is trolling.
You know, if you substitute the P for the letter just prior to it in a standard Latin alphabet, pronounce the resulting three letter combination as one word, and then spell out what it is you pronounced, you get a different pseudonym. One that trolls. Coincidence? Perhaps. But perhaps not.
I would give Real Life Examples of people being absurdly weirded out even by things they themselves introduced into the bedroom, but as everyone has surely noticed, I don't do that anymore. You're welcome.
when I first read the passage when the book came out I assumed she got naked and inserted her birth control cup in front of him. that was a huge turn on for me when my future wife and I were new at it. it doesn't really fit the passage though.
Don't you people realize? It's not a novel -- IT'S A COOKBOOK!!!
there was a famous scene from our old 1940s historical movie when a woman sips wine from the cup and passes a mouthful to her lover
similar scene is in the movie Indochina but without any cups
irl i imagine that would be gross to drink from someone else's mouth
bingo!?
64: Well, here's the kind of example people have in mind with respect to "resistance" to imagining a counterfactual morality (from this paper):
Jack and Jill were arguing again. This was not in itself unusual, but this time they were standing in the fast lane of I-95 having their argument. This was causing traffic to back up a bit. It wasn't significantly worse than [what] normally happened around Providence, not that you could have told that from the reactions of passing motorists. They were convinced that Jack and Jill, and not the volume of traffic, were the primary causes of the slowdown. They all forgot how bad traffic normally is along there. When Craig saw that the cause of the backup had been Jack and Jill, he took his gun out of the glovebox and shot them. People then started driving over their bodies, and while the new speed hump caused some people to slow down a bit, mostly traffic returned to its normal speed. So Craig did the right thing, because Jack and Jill should have taken their argument somewhere else where they wouldn't get in anyone's way.
So, the intuition is that as a reader you find the progression in that passage to be funky (for reasons more interesting than philosophy's use of silly examples).
And I was wrong in 62 in how I stated the supposed anomaly in this case: it's not that you resist imagining emotional indifference to a generic moral horror, but that you resist imagining that emotional indifference as being (in that fictional world) the morally right emotional action in that case. In effect, you resist imagining a world in which the "moral facts" are inverted as a matter of course, whereas you don't with other kinds of facts.
irl i imagine that would be gross to drink from someone else's mouth
No. Not if you're planning on putting your tongue in there, anyway.
irl i imagine that would be gross to drink from someone else's mouth
Not if you work at Connected Ventures.
They were convinced that Jack and Jill, and not the volume of traffic, were the primary causes of the slowdown. They all forgot how bad traffic normally is along there.
and then
People then started driving over their bodies, and while the new speed hump caused some people to slow down a bit, mostly traffic returned to its normal speed.
Apparently, "they" weren't the only people who forgot "how bad traffic normally is along there." What a poorly imagined passage.
I had the same thought as martin van buren, without his specific valence or memories.
I don't think this is hard to imagine at all. (In fact, when I read Man in Full, I thought it was referring to something like "fire cupping".) Going with AWB's suggestion, what if it was a demitasse cup?
It's kind of like if there's a particular album playing while you happen to be having the hottest sex of your life. If you named the album to someone else, they might say, "But that's not sexy!" and you'd reply, "But it is to me, because I can't hear it without remembering the hottest sex of my life."
I can think of exactly one song like that. But it wasn't during any sex act, it was while driving in the car on a date. And she was talking about how sexy the song was. But it was obviously a cartoonishly intentionally sexy song to begin with, so I guess I just have the intended response, but the song would have failed to induce the intended response for me if it weren't for this girl's eroticism.
I can think of exactly one song like that.
Huh. I can think of a half-dozen or so songs that I can't listen to without feeling a twinge of one kind or another.
I bet it was the kind of cup used in bloodletting.
I immediately pictured a little cup like they use to serve tea at Chinese restaurants, white with no handle, and sturdy.
Wow, nobody has followed up Hamilton-Lovecraft's link to firecupping at #26. I followed the link, and I can imagine variations being very hot, either wet or dry. A fine cognac used as the flammable, a beautiful porcelain cup, the sensations, the aftermarks. For several hours I have been trying to imagine firecupping used at erogenous zones.
The reason firecupping interested me was listening to a 15-min segment on NPR this afternoon about hickeys. In general guys thought hickeys hotter than ladies.
Is this a philosophy or a sex thread?
Wait, that should be "26 meet 9". I forgot what my own link was to. But then, 26 is a lot more informative than 9. I ban myself.
In general guys thought hickeys hotter than ladies
Man, we even rate lower than hickeys?
So the story I might tell about a fictional world might go something like this: the author tells a story, and the audience as necessary fills in all the background details that the author left out. Usually this is quite a lot, and we take our cues from context. E.g., we'll assume that the laws of physics are the same unless otherwise stated. You probably believe Sherlock Holmes is white. And then we imagine things to flesh out the fictional world.
But sometimes the author changes those rules. One interesting philosophical puzzle (so it is claimed) is that it is far easier to get an audience to imagine a world without our laws of physics or biology, but (so it is claimed), much harder to get an audience to imagine a world where they endorse a different moral system. We don't have a problem with hyperspace, but we resist imagining, e.g., infanticide as a moral duty. (A good example in Gendler's paper is one of Kipling's happy imperialist poems. Take up the white man's burden? Eech.)
So this example comes up in that context. We figure they were doing something with the cup, even if we can't say what (I don't think it's a diaphragm, because they're really very dull), we get that it's meant to be writerly somehow, and move on.
I kind of reject the premise of the argument, because I don't think that there's a good way to compare 'that world is too unlike ours physically, i can't imagine it' and 'that world is too unlike ours morally, i can't endorse it.'
110: You can't deny this charm, AWB.
77: The field is pretty young, comparatively, and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor.
Is this a philosophy or a sex thread?
There's a difference?
and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor
And experimental rigor, too, frankly.
113, 115: Again, the parallel with linguistics suggests itself.
I figured that went without saying. I like the idea of interdisciplinary stuff, but it often doesn't turn out resembling scholarship.
(A good example in Gendler's paper is one of Kipling's happy imperialist poems. Take up the white man's burden? Eech.)
Actually, that's a terrible example, because she completely misreads it.
How so? She's not doing literary criticism, but as a 'here is a poem people cringe at these days'-type example, surely it suffices.
Been about three years since I've read the paper.
There's another paper on the same topic that points out what I claimed in 118, but I can't remember what it is.
The field is pretty young, comparatively, and there's a lot of excitement over psychology, which (imo) tends to mean a trade-off in philosophical rigor.
I don't think that's an excuse. (I also think that, if it's a field, it's some kind of philosophical aesthetics, which is not young.) Have you read some of these papers? It's as if the authors have never read a novel. The relevant discipline with which to get inter isn't psychology but literary studies.
The relevant discipline with which to get inter isn't psychology but literary studies.
Yeah but then no MRI machines.
God fucking dammit.
Ah, crap.
I have read many of the papers. I disagree with them, but I think they're wrong in interesting ways or at least fruitful ways for me. (Except for the Fearing Fictions paper, which is a complete trainwreck.)
Aesthetics got banished to Tatooine for a period of years, and it's all this psychological inter-ing which is making it respectable (so the story goes.)
Yeah but then no MRI machines.
Perhaps both? "Inter" means "among" as well as "between."
There's a reason the philosophy department and the literature department are separate.
How so?
Well, here's what she says: "Leaving aside the niceties of literary interpretation, let us take this poem as a straightforward invitation to make-believe, a proposal about something we are called to imagine without committing ourselves to its literal truth." But why in the world would we want to do that? What's the point in adducing an actual text if she's going to ignore obviously salient features of it? "WMB" is not an invitation to make believe any more than a racist op-ed essay is—the fact that it rhymes doesn't make it so. It is not the case that "among the things that Kipling is asking us to make-believe there are the following: that there are certain white characters who have taken it upon themselves to initiate a group of nonwhites into the ways of Western culture" and so on. He is exhorting his audience, a group of actual existing white persons, to go out and initiate nonwhites into etc.
She may as well have taken an actual racist opinion essay from the time (I'm sure there were many) and said, "let's pretend this is a short story". If you want to investigate our imaginative interactions with texts, you should choose texts appropriately interacted with imaginatively. (You should also not take extremely crude texts; they won't provide very interesting fodder.)
There's a reason the philosophy department and the literature department are separate.
Indeed. There is a reason. It's not because the interests of the members of the one and the interests of the members of the other are completely nonoverlapping, or that the members of the one have nothing to say to the members of the other and vice versa, though.
There's a reason the philosophy department and the literature department are separate.
And there isn't a reason the philosophy and psychology departments are separate?
125: La, la! Nothing to see here! [sweeping dissertation behind door]
128: that is entirely due to the clinamen.
Let's all try to figure out what true and relevant thing Cala might have meant to communicate with 125.
No, it's because they fight like scalded cats when in the same room.
126: I see where you're coming from. Off the cuff, the problem is not her use of Kipling's poem, it's that pretense has overrun philosophy.
No, it's because they fight like scalded cats when in the same room.
Maybe at your benighted institution, said the guy TAing a class team-taught by a philosophy and a french professor.
130: Psychology isn't quite an experimental science yet, there are still very few chinamen even in its "labs" (read: MRI machines).
I can't believe nobody's brought up the Delight of the Razor. I blame America's public schools.
(Except for the Fearing Fictions paper, which is a complete trainwreck.)
When I applied to graduate school, I submitted as a writing sample a paper which spent some time along the way doing its best to rip "Fearing Fictions" apart, with a rather disdainful and mocking tone. At some point it dawned on me that I was applying to Michigan, where the paper's author is a professor, and had I been under serious consideration as a candidate he surely would have read it. Since, in the end, I probably wasn't a serious candidate, and I didn't get into Michigan or anywhere else (in the scheme of things, probably a good thing and the right decision on their part), I assume he didn't get around to reading it, but I've always kind of hoped he did.
Since the doctrines of "Fearing Fictions" appear more or less unaltered in MaMB, and are several decades old, you'd think its trainwreckness would have come to light by now in the general philosophical imagination, and pretense's death-grip would be slackened.
140: I sent to Columbia a paper (actually my senior thesis, which was rather short) arguing against Danto. They waitlisted me (and didn't accept me off it). I wonder if he read it.
Aw, hell, the thread has rightfully moved on, but I'll post this already-written thing anyway:
97: it's not that you resist imagining emotional indifference to a generic moral horror, but that you resist imagining that emotional indifference as being (in that fictional world) the morally right emotional action in that case. In effect, you resist imagining a world in which the "moral facts" are inverted as a matter of course, whereas you don't with other kinds of facts.
There's a lot packed in here. Discussion of 'moral facts' gives me a pain in my ass, makes me grit my teeth. It seems clear that the terms of the discussion are impoverished.
IF there are 'moral facts' then we respond to them as a matter of course, and resist alternative responses ... okay ... I really don't know what 'whereas you don't resist [emotionally] with other kinds of facts' means, except that alleged moral facts don't operate in the same way as non-moral facts, and we knew this. But I'm not sure yet how resistance to imagining illuminates the difference.
Did I mention that moral realism drives me nuts?
For resisting non-moral facts, try some of Wittgenstein's early thought experiments -- the one about the imagined people who measure and assess the value and worth of a pile of sticks (or whatever) according to its stacked height. The same lot of sticks piled higher is worth more, in their world, it is obvious! Do you resist that imaginative world?
If so, it indicates a kind of non-moral fact (modes of measurement, in this case) that we are very deeply invested in. Emotionally, if you want. Morally?
So I'm not seeing the lack of imaginative resistance to the "other kinds of facts" that, um, Moby Ape referred to in 97.
141: I know! Either the argument is false, or it's dumb (to fear things fictionally is to be afraid of* a fiction. No shit, Sherlock.)
*"near" would be a better word here.
Wea/therson's paper asserts that the resistance (a) exists and (b) owes to a certain kind of supervenience structure, such that he thinks he can give examples of nonmoral resistenda. I don't have a copy of it (though prob. I could get it through jstor and actually I might have a paper copy about ten feet to my left), though.
Mo/ran's paper has examples like the funniness of an occurrence, but he's really chasing different prey (at least in part of that section—actually it's hard to separate the strands of the argument in the section where he's talking about this stuff).
You know, it's not all fancy philosophy but the "imagine a world where what we think is good is thought of as bad and that's completely normal" thing is a pretty common trope of, er, science fiction. Sometimes as a critique of the world in which we live, sometimes just for the frisson. The first story that springs to mind is the famous one (whose title I can't recall) where there are these anthropologists on another planet, see, and they have this daughter. And they all make friends with the intelligent creatures of the planet. But one day the creatures start chasing the daughter with obvious intent to kill. Why? Why? Because they in fact chase their young and kill the ones they catch as an ordinary culling-the-stupid-ones practice sanctified by thousands of years of, and blah blah. So the girl survives, thinks sadly of all the dead little alien children and goes back to her parents who tell her that that's just how the aliens are and therefore it's okay.
Now, this isn't a totally different universe in that the main character is a human who isn't used to the idea of killing your young to make sure that you don't end up with stupid adults, but the intent of the story is very plainly "imagine a world where what strikes us as moral horror really isn't."
And as you can tell from the plot summary, this is hardly some kind of super-subtle work of literary genius. I'm not making an argument for this particular type of SF as profound or even especially interesting.
Counterfactuals? Hah. A few trips to science fiction conventions will improve your counterfactuals out of recognition, plus you'll get to meet all those fascinating people who dress up like aliens.
to fear things fictionally is to be afraid of* a fiction
But that's not what fearing things fictionally is, Wal/ton nach. I do think there's lots to disagree with in the article, but it's a pretty resourceful view in the end—141 should be read modus tollens–style.
It should be obvious to anyone with a healthy sex life that she masturbates him into the cup, and then empties the cup.
That doesn't seem particularly exciting.
I'm sorry, parsimon.
I know how wild that thing with the gerunds drives you.
I think pretense is resourceful (if running amok), but the article is just a mess. Again, a while since I've read it, but he tries to treat emotions like belief. I have a make-belief in the story, and I also have a fictional emotion. Trouble is when you ask what a fictional emotion is; is it not real fear? Walton wants to say it's real fear (I think this is confirmed in a follow-up article), so I'm not pretending to be afraid, but then it seems this just amounts to a restatement of the problem. Currie's work in this area is better.
145: Scifi is one main reason I don't buy a strong version of the 'resistance to immoral imaginings' thesis.
He doesn't try to treat emotions like belief, he's a cognitivist about emotion, which is a reasonably respectable position, and I'd be very interested in seeing where he wants to say that you've got real fear in that case, because that is absolutely the opposite of what he says in the article and his big book.
You do feel something—various physiological things and their relatively low-grade psychological accompaniments as, eg, a thumping heart, tension, and disposition to react startledly to sudden movements—but those all get classed as "quasi-fear", and it's on the basis of a de se imagining about yourself, guided by your (actual) quasi-fear, that you are fictionally afraid.
IMO one of the weakest bits of the paper is the part where he says that quasi-fear arises as a result of imagining the truth of various propositions, and then has a footnote saying he doesn't need to say anything about how that works.
If I'm remembering the second article correctly, fictionally afraid and "quasi-fear" turns out to be real fear. (In Emotions and the Arts eds. Hjort & Laver.) Could be wrong, last read it 2004, &c.
This awful De Kalb thing. "The gunman, he said, had been a graduate student in sociology at the university in 2007, but was no longer enrolled here."
moral realism drives me nuts
Are statements of ethical judgment in your view truth-apt?
155: The shooting at Nerd U Business School in 2003 was a similar thing. No-longer-enrolled grad student who'd been hanging around for years, obviously had lost his mind, but was tolerated in computer labs and stuff, freaked out and held everyone hostage for eight hours before shooting people. Grad school is a really bad thing for the potentially unhinged. Very, very sad.
155: Oh god. Early reports were saying he was aiming for the instructor of the course.
148: Depends on how she empties the cup.
Since the doctrines of "Fearing Fictions" appear more or less unaltered in MaMB, and are several decades old, you think its trainwreckness would have come to light by now in the general philosophical imagination
That I found myself thinking this about all sorts of things as an undergrad philosophy major is one of the reasons it's probably best that I'm no longer half-assedly trying to make a career out of it.
Yeah we had one at my campus a few years ago, too. Disgruntled loser (male) shot dead several of his professors (female).
Thanks to google books:
But:
I stand by my contention that it is only in imagination that Charles fears the Slime, and that appreciators do not literally pity Willy Loman, grieve for Anna Karenina, and admire Superman …
This seems an awkward concession for him to make, though I'm not sure how to press things (having just read it right now and all).
If I should find myself imagining [torturing kittens after reading a story about same] with a sense of glee, however, I may have reason to worry. The glee is real. But my experience certainly does not have to be described as actually taking pleasure in the suffering of kittens, in order to signal a cruel streak in my character.
Are statements of ethical judgment in your view truth-apt?
What? I can't tell if that's a serious question. (I've just gotten over straightening my face over ben's "resistenda," not even knowing if he coined that or it's actually a term.)
What's meant by "truth-apt"? If that's a straight question and it means "Can statements of ethical judgment be assigned truth-values," then no, not in the same sense in which nonmoral statements can (in theory) be. But that's taking the question on the given terms, terms that are inadequate.
I'm pretty sure the mention of his department wasn't in the article when I first read it (right after Cala linked it). It just said he was a former grad student.
Either I just coined it or I derived it from the Latin resistere; you may say either.
97: As anyone with a feel for actual literature would tell you, the issue with that passage is that there is an extremely crude omniscient narrator informing us what we are to feel. The reader takes offense at being lectured in such a patronizing manner, and resists it. The omniscient narrator is a somewhat didactic literary style, unusual in Western history and used in only a few literary periods, with a lot of potential to be crude if it isn't artfully managed. A lot of readers would resist that style even if the author was informing us of something that was morally unexceptionable, let alone when it's an unusual viewpoint that the author hasn't laid the groundwork for. In other words: the problem is that *it's horribly written literature, it's bad art*.
I venture to say that readers would have no problem identifying with a very artfully written character, say a charming serial killer, who was really irritated at someone blocking him off in traffic and therefore kills them.
The real issue here is that philosophers are for the most part horrible artists, who don't understand how literature works or how it achieves its effects. But analytic philosophers, like economists, are imperialistic rationalists, ever ready to bulldoze other intellectual fields in the name of boringly literal-minded logic chopping. They can't bring themselves to imagine that novelists are actually smarter than philosophers, and engaged in a more complex and involved game.
Where *is* Emerson in this discussion? Seth Edelstein would also be a good participant.
See, 165 is why I said people who want to talk about this should talk to the litterateurs.
When I was in grad school a nutbar in my program shoved me around after a misunderstanding. I learned afterwards that a group of senior faculty wanted him expelled because they were worried he'd go postal, and this incident crystallized it for them. But the tried and true method of ignore him and he'll go away was chosen instead. He dropped out later, after I left. He was the kind of person about whom people would have said, if he had gone off, "Yeah he was a total lunatic."
Oh, don't get me wrong, I liked it too.
Is Ben actually agreeing with me?
I will admit I'm kind of trolling, as my invocations of Emerson and Edelstein should make clear. I'm a little drunk at the moment. Alcohol always helps you really *feel* the superiority of literary over philosophical reasoning.
It's like that funeral puzzle test for psychopaths.
?
At the funeral for her grandfather, a woman meets a man whom she doesn't know. He's the man of her dreams, and she's utterly infatuated, but he leaves before she can get his name or phone number. A few days later, she kills her sister.
Why did she kill her sister?
161: That the complete uselessness of Charles fearing the Slime as an illustrative example hadn't occurred to Walton all those years later boggles the mind. The other thing is kind of awkward, but I think it kind of misses the point. I have a sexual fetish for the Thing with the Cup, by which I mean that I take pleasure in imagining it and enjoy seeing it fancifully depicted, but morally I know that the Thing with the Cup is reprehensible (this is probably part of why I find it erotic) and were I to actually witness it being perpetrated I undoubtedly would try to intervene and save the poor gentleman who was being so used. If I get aroused by the idea but not the reality then it's not my arousal that's quasi-, it's the thing which caused it.
If I developed it further maybe that could be a good example of how I think Wal/ton's intuitions about the world of emotional make-believe can be a helpful tool, but how I also think they're still mostly misapplied when used to look at fiction.
I actually agree with the first two paragraphs of 165, yes.
171: because he'll come to the next funeral too, or so she reasons.
157: I left the building literally just as that guy was entering through the other door. He ended up shooting one of my good friends.
I actually agree with you, too, PGD. Quite a lot of the problems with the arguments are that the toy problems philosophers use suck. I think we can imagine clever and sympathetic serial killers, &c.
On the woo-hoo-dumb-philosophers point, not so much, but you're drunkenly trolling.
Ugh, 168 makes me crazy. I've seen so many grad students who are not doing well just about lose their minds trying to get some kind of feedback from professors that explains what's happening to them. I know one woman who has lost every teaching job she's gotten, been shuttled about from advisor to advisor, and she's not a totally emotionally stable person to begin with. She keeps cornering me and asking why, what evidence are people giving for shunning her. And no one has any answers other than, well, they secretly think she's not very smart and that she's more than a little crazy. No one's telling her to drop out or advising her about how to make the transition into another career. They just walk out of the room when she walks in. I'd flip out if I were her.
170: You'll have noticed that your trolling was utterly transparent, and garnered a wink and a nod.
(I don't know why straight anal phil involves itself in literary stuff either, to tell you the truth; it's not its forte.)
Frowner's 145 is really key, I think. When I watched the anti-slavery movie Amazing Grace, I was reminded of how movies like that are always walking a tightrope of showing modern audiences how Very, Very Bad the thing that used to be common was (forced marriage, slavery, whatever), but yet also how common and ordinary it was, and how extraordinary for a person of that time (the Hero of the Movie) to be able to see past it.
One reason this is a tightrope is that not all movie fans are scifi fans. I suspect that if they were, you wouldn't have to waste such an almighty amount of time establishing that this is a really, really bad thing, and could just leap into establishing the thoroughness with which the historical world was infested by the phenomenon and the interestingness of someone being able to see past it.
straight anal phil involves itself in literary stuff either
I think it's called "the DL" or something like that.
I actually agree with the first two paragraphs of 165, yes.
Yes, the third paragraph was when I really let loose with that venomous (and hence falsifying) trolling impulse. As Cala points out as well.
But there's an emotional truth in trolling, which is why I think it gets a bum rap. I do get tired of obviously unimaginative rationalistic types lecturing on the "real meaning" of imaginative culture. One sees this now with evolutionary psychology, economics, perhaps analytic philosophy sometimes, etc. It's a mindset.
Keep an eye on Ben at the next meetup, people. Fair warning.
178: You can see it, too, in adaptations where they change something (The Golden Compass comes to mind) so the audience won't be offended. I think the basic idea the puzzle gets at is that we're not surprised when a movie does it with moral sensibilities, but we don't expect the movie to ignore hyperspace because it's too unrealistic.
Not that I'm thinking necessarily of what parsimon means by "straight anal phil", other than using "analytic" as a catchall for "modern Anglo-American and disdainful of Derrida and probably Foucault," but:
I think the key is for philosophers to be cognizant of their limitations (which will be different from one to another) and for both philosophers and their readers to be able to distinguish when they're straying out of properly philosophical territory. I had one class where we read a discussion or debate or symposium or whatever between Wayne Booth (I think), Martha Nussbaum, and Richard Posner on the moral implications of reading, teaching, enjoying, certain fictions that might be morally objectionable, taking the issue of antisemitism of Merchant of Venice as the prime example. Now, I think this is an area where philosophers can contribute valuable and interesting discussion, but the problem was that the back and forth kept straying into textual interpretation and appreciation. Booth (unsurprisingly) struck my classmates and me as an extremely insightful and sensitive reader of the text; Nussbaum was ok, but at the level of maybe the average undergrad English student, and Posner demonstrated the literary insight of a clever fifth grader.
we're not surprised when a movie does it with moral sensibilities, but we don't expect the movie to ignore hyperspace because it's too unrealistic.
Wait, I understand the first half of this, but I can't translate the second. Can you clarify?
In the class I had as an undergrad reading the Nussbaum-Posner exchange, the professor (I believe this anecdote has been immortalized on, yecch, crescat sententia) pointed out that Posner writes an article a day, and when you do that, you don't have time to think.
174: I'm so sorry. That was a scary, bad day.
Speaking of Wayne Booth, the free indirect style in that passage is atrocious. Her demented form of lust!?
186: Let's take The Golden Compass as an example. So we have resistance, say, to imagining that the Vatican sanctions violent and cruel torture of children, but not to the fact that Lyra meets talking bears. Both represent the world of the story as different from the real world, but only one gets people upset, even though both are make-believe.
192: I believe we've already met.
184: Yeah, I figured it was down-low.
185: What I mean by "anal phil" is nothing more or less than that it's an abbreviation for analytic philosophy (I am, by training, one of those). With respect to the rest of 185: there is no such thing as properly philosophical territory, but there is such a thing as established philosophical terms of discussion, just as there are terms of literary discussion. Trained philosophers do well to know when they're out of their field of expertise, just as do literary or political theorists.
Jurists like Posner have the attitude of stereotypical analytic philosophers with none of the quality control.
187:
I can't imagine that I wouldn't recall hearing that professor saying that, but I also can't imagine him not repeating it from year to year. (I believe I took that class the year after you probably did. If I'm correct in matching the name Ben w-lfs-n with the proper face, we never had a philosophy class together, but we did share in the excruciating joys of reading Dostoevsky with a deaf man.)
I suspect that if they were, you wouldn't have to waste such an almighty amount of time establishing that this is a really, really bad thing, and could just leap into establishing the thoroughness with which the historical world was infested by the phenomenon and the interestingness of someone being able to see past it.
I TA'd a class where we read slave narratives - Frederick Douglass and Harriet Jacobs - one week and a collection of proslavery documents the next week. Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it. Which is to say that the proslavery stuff shocked the students far, far more than the narratives written with an eye toward shocking their contemporaries.
I believe we've already met.
Sorry, I didn't recognize you from the front.
we have resistance, say, to imagining that the Vatican sanctions violent and cruel torture of children, but not to the fact that Lyra meets talking bears. Both represent the world of the story as different from the real world, but only one gets people upset, even though both are make-believe.
That strikes me as social convention pure and simple. A matter for sociologists, not philosophers. We have large industries devoted to getting people used to talking bears (if only so they can buy toys based on the film), and also large industries devoted to getting people to be morally scandalized at any criticism of religious groups (if only so said religious groups can mobilize voters more easily).
When I watched the anti-slavery movie Amazing Grace, I was reminded of how movies like that are always walking a tightrope of showing modern audiences how Very, Very Bad the thing that used to be common was
I actually believe there was a particular historical / cultural period going from Victorian England through the political correctness of, say, the 1980s -- call it the Whig period, where the modern liberal middle class takes shape -- where there is a lot of moralistic middlebrow censorship of the facts of human history and human nature. Slavery looks rather different once you understand how horrific the lives of e.g. white European peasants were in the early 19th century.
195: That's essentially what I meant, hence the anecdote about Booth v. Nussbaum; there's a conversation on that general topic where the philosopher could have brought her philosophical terms of discussion to bear, and in fairness she probably did, but what I remember is her attempt to engage in the literature professor's terms of discussion and failing. What I was trying to say wasn't "there are things philosophers shouldn't talk about" but "there are ways philosophers should be wary of talking about things."
(Nussbaum being more or less a philosopher, though I'm not sure to what extent we could say she "trained" in the field.)
200 was me.
Speaking of which: 200!
198: Huh, interesting. I liked The Price of a Child exactly because it did such a vivid job of evoking the weirdness of being a former slave lecturing in the North. It resists this notion of wise, benevolent white anti-slavery activists.
The narrator is blunt about how impatient she is with Northern audiences' fascination with the cruelties of slavery, and the hypocrisy of the Northern reliance on cotton.
Way back at 145, says Frowner:
You know, it's not all fancy philosophy but the "imagine a world where what we think is good is thought of as bad and that's completely normal" thing is a pretty common trope of, er, science fiction. Sometimes as a critique of the world in which we live, sometimes just for the frisson.
And as you can tell from the plot summary, this is hardly some kind of super-subtle work of literary genius. I'm not making an argument for this particular type of SF as profound or even especially interesting.
I don't know why you'd dismiss this as uninteresting; of course it's a standard sci-fi alternate worlds thing, not always well carried out. There's a great deal of value in it, philosophical and otherwise.
My only point here and there in this thread is that obviously *moral* oddity is the easiest way to demonstrate our, er, situatedness; but there are seemingly nonmoral oddities that point it out equally well. A good example: Le Guin's The Left Hand of Darkness.
I liked that Dostoevsky class, Harrison. Also you have me, indirectly, to thank or blame for the reduction in the required reading (I complained to a girl who was in his Hum class, who relayed—without my asking—the complaint to him).
what I remember is her attempt to engage in the literature professor's terms of discussion and failing.
If you read the article by her I recently had the extreme pleasure of reading, Finely Aware and Richly Responsible: Literature and the Moral Imagination, you will see something in the way of a justification for that. (Reading that article gave me, literally, innumerably infinite hedons.)
What is it like to be that thing with the cup?
I don't know why I put that article name in italics since it's an article and not a book. OH WELL.
Perhaps because the title of the article contains quotation marks?
Yeah, I thought that might be it. Triggering avoidance behavior or something. But normally I love nesting shit like that.
What is it like to be that swamp thing with the cup?
Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it.
The fact that this occurred with nonfictional texts ought to be heeded.
But a weird thing about early Nussbaum is that she somehow thought Aristotle was some kind of moral paradigm. Weird.
Speaking of, I realized last night why Labs finds eudaimonia implausible: although he is very tall, he has, at best, a tenor voice, and doesn't speak particularly slowly. That must rankle.
205: I liked *what* we read immensely, and I appreciated when he would spend time discussing the original Russian to clarify an image or metaphor, but otherwise I felt like we basically showed up and spent a lot of time reading aloud (plus I wasn't super fond of the translations we used, but whatever). The evolution of how he handled his hearing issues also got progressively ridiculous, as I recall. There was one day - I think we were still on C&P - where a girl gave her opinion on a question he asked, he requested that she repeat it while pointing his contraption at her, nonetheless understood her to have said the opposite of what she had said, and dismissed her with a sarcastic comment. ... I don't mean to be ungenerous to Professor I., I personally didn't get a lot out of the class that I couldn't have gotten from just reading the books, but of course that's neither here nor there when it comes to someone else's experience.
I should read that Nussbaum article; I had the pleasure of taking a class from her in my last quarter and was very impressed.
I can email it to you if you want (you can email me if you don't want to post your address). If you knew Flo Lall/ement or Anne Ciech/anowski, then you might be the very person to whom I heard that very opinion attributed when discussing the class near or shortly after the close of the quarter with one of the aforementioned. I got more out of it, being hermeneutically unskilled. (Also, he liked my final paper a lot, so, you know, I'm inclined to think positively.)
215: Funny; that he liked my final paper was one of the reasons I didn't think positively of him, since I didn't think positively of it. I kinda sorta knew Anne, and although I can't place her face, Flo's name sounds incredibly familiar, so I might be the person; it could also be the friend I took that class with. We had the same generic take on the class, and probably reinforced each other's negative impressions about the professor. I'm a much more generous person than I used to be.
I should probably just buy the book, but I'm also dropping the pseudo-presidential pseudonymity since I only adopted it when I thought I was going to driveby Ken/dall Wal/ton. Of course, I'm dropping it for another pseudonym, but this one has an email address.
the facts of human history and human nature
You just suggested it's purely a social construction that we a) prioritize human matters (children, the Church) over non-human (bears), and b) react more strongly to depictions of plausible phenomena (institutionalized torture) than to the obviously fanciful (talking non-human animals). This doesn't leave you much room for talk about human nature.
Students had a much easier time believing there was a past in which slavery was common, brutal and wrong and people fought against it than they had believing that there was a past in which slavery was common, brutal and wrong and people fought for it.
The only way to address this is for people to learn about the socio-economic conditions in place at the time, the consequent lifestyle differences, and the extraordinary degree to which those things determine moral outlook. Slavery in the South was simply an obvious choice at the time. If we want to go meta, as it were, and normative, we'd need to say that we must establish a critical stance toward our own socioeconomic conditions, and a critical eye toward our ethical behavior, in order to measure our inherited moral outlooks against what we think might actually be right.
Yeah, that's the idea, isn't it? Unfortunately, if you take seriously the notion that we're necessarily blind to any deep wrongdoing, there's not a thing we can do. (This is what I objected to in Megan's peasant championship of the other day. Actually.)
Our current deep wrongdoing is widespread destruction of the less fortunate members of the planet, of course, so rewrite this:
Students had a much easier time believing there was a past in which consumer capitalism was common, brutal and wrong and people fought against it than they had believing that there was a past in which consumer capitalism was common, brutal and wrong and people fought for it.
Unfortunately, if you take seriously the notion that we're necessarily blind to any deep wrongdoing, there's not a thing we can do.
And yet, this may well be the case.
Teo, they say that conscience is supposed to fill the necessary role.
We will probably always oppress rocks and mosquitos, but at least we can figure out a few things: the earth doesn't like what we're doing to her, and she's fighting back, and she's going to stamp us all out if we don't stop it. So now we have to decide whether to continue killing everything.
I'm not sure how one oppresses a rock, but I say the mosquitos have it coming.
I didn't read this thread, but seeing "fucking" and "MRI" juxtaposed in 122 reminded me of this study and this study. Apologies if you've already seen them.
Parsimon, your 218 reminds me of an exercise Michael Flynn sometimes engages in at conventions, and used to do a few times on the GEnie network: asking folks something to the effect of "What beliefs that are very important to you do you think most likely to seem ridiculous in 50 or 100 years?" It's very hard to get people to answer that question. They're happy to talk about the beliefs other people now hold that they expect to be as generally ridiculous-seeming as they are now to the speaker. The self-scrutiny is apparently much less entertaining.
It's a good exercise when given some serious engagement, though.
thinking it's OK to eat animals seems like the most obvious of my moral beliefs that might be destined for the ash-heap, especially if people are able to vat-grow cultured tissue. and it might turn out it's not OK to abort human fetuses in a world in which fetuses and embryos can be easily brought to maturity outside anyone's body. or rather, the coming to be of such a world might make it apparent that it has always been the case that abortion is wrong.
my husband has lots of interesting things to say about the gendler thing, including sci-fi considerations, but because the academic publishing system is idiotic and takes years to even reject things no one knows about them but me. and the people who read that other blog of his.
I'm not sure if there are any core ethical precepts I hold that I'd imagine would seem ridiculous in 50 or 100 years. The vast majority of them have been around for much much longer already. I suppose it's possible that the relative secularism of the past two hundred years or so might get decisively reversed and my views on religion and society might seem stupid and outmoded.
In terms of science, however, I'd imagine that huge swathes of what I believe will seem stupid. I suspect a lot of people are fairly sure that we are on the cusp of some 'paradigm shift' and that physics may be due for a big shake-up.
On a personal level, I'm a pretty convinced skeptic about all things AI, transhumanist, 'singularitarian', etc. I don't imagine a future, in the next fifty years anyway, in which strong AI, pervasive nanotech, and so on, exist. In fact, I'm fairly sure that 50 years from now won't be more radically different from the present day than 1958 is in the other direction.
However, it's possible there might be some breakthrough tomorrow that renders that dumb.
165: I'm taking a break from slagging on analytic philosophy.
Seth Edelstein is actually Seth Edenbaum, who never comes here.
All Jews look alike to PGD though Edenbaum is only semi-Semitic. (PGD is a Nazi, you know).
187, 196: As an influential legal scholar, Posner is one of the most influential analytic philosophers in the world, and as a leader of "Law and Economics" (along with the loony Gary Becker) he's a pretty influential economist too.
So of course economists and analytic philosophers both disavow him. Not their problem. (Some economists even try to disavow Becker, though the Nobel Prize makes that more difficult.)
"In fact, I'm fairly sure that 50 years from now won't be more radically different from the present day than 1958 is in the other direction."
well, 1908 is radically different from 1958. If things are going to be as radically different in 2058 as 1958 was from 1908, it's gonna be a hell of a big number of changes.
However, there is also a question as to whether the rate of change has been increasing in the last few decades.
If you don't think 2008 is radically different from 1958, it's only because you've lived through all or part of it and didn't notice the wood for the trees.
Ah. But what precisely does "radically" mean?
230 gets the point, I think.
In that while there have been massive changes since 1958, they don't seem to be the sort of changes that the strong-AI/singularitarian/transhumanist types point to. Perhaps 1908-1958 was much more that sort of shift.
It does, as Emerson says, depend a lot on what you mean by radical.
1959 ideas of the future. Much like?
OK, they got online shopping right.
The transhumanists and singularitists have loaded their ideas with a lot of nerd fantasies and ideological freight, too. Even if they have the magnitude right, they've probably missed some big details.
It reminds me of the Buckminster Fuller / Timothy Leary / Ken Kesey / Stewart Brand / Marshall McLuhan visionaries back in the sixties. They imagined a lot of today's world (e.g. the internet and satellite feeds) but not, for example, George Bush and Dick Cheney.
re: 234
Yes, I remember reading a lot of my parents' books when I was a kid [Whole Earth Catalogue, that sort of thing] and it was all geodesic domes and hydroponic sex communes.
People weren't just predicting hydroponic dope, either. The plan was to ditch the archaic past and grow staples hydroponically without all that inefficient dirt.
236: Which isn't so far from what happened, in a way: agriculture has moved over to a large-scale forced N-P-K cycle that doesn't rely much on dirt. People thought they'd be doing it themselves small scale. Instead, a big corporation is doing it in the Central Valley.
Of course, it turned out not to be such a great idea, either.
The hydroponic dope, on the other hand, is excellent.
How strong do you like your AI, sir?
Emerson has it right with the "nerd fantasies and ideological freight". Actually, a lot of the tech associated with transhumanism is being developed for surgical prosthetics. It's turned out that remote controlled machines are better at exploring space than people, but space is still being explored. I wouldn't bet against current machine learning research and the Japanese games industry delivering a lot of the stuff Minsky was banging on about twenty years ago, but it won't look like Asimovian robots.
re: 239
Yeah, I have fairly stringent criteria for what counts as AI, I suspect.
Much more advanced machine learning and/or 'expert systems' I take for granted.
239: That's true. Some bits of `AI' turned out to be a lot harder than first thought (big surprise). But a lot of progress is being made. It is already pretty ubiquitous, but not in a particularly visible way. Most people working in the area aren't really pursuing Asimovian robots, at all.
240: The problem with this is a tendency (someone had a nice sound bite about it, but I forget it) for `AI' research to be reclassified as something else as soon as it works. I never liked the term that much, really, but I can't see a sensible line between machine learning and `AI' unless you want to disclaim everything below demonstrable consciousness. In which case, sure, maybe we won't get there any time soon (or at all?)
re: 242
In which case, sure, maybe we won't get there any time soon (or at all?)
Well, yes. As I said, my issue is with the singularitarian types who predict genuinely 'strong' AI. Which I don't see happening any time soon (if at all) either.
On the other hand, the idea that incremental but useful progress can be made in using computers to carry out the sorts of information processing tasks that have hitherto been done solely by humans seems inevitable.
We are already a huge way down that line in all kinds of areas -- language processing, image processing, etc. If we are calling that 'AI' then I have no issue with the idea that future progress with 'AI' will happen.
I never liked the term that much, really, but I can't see a sensible line between machine learning and `AI' unless you want to disclaim everything below demonstrable consciousness.
I thought that was at least one conventional usage of AI, isn't it?
243/244 yes, this is a traditional line, with the problem that it is very difficult to define.
Personally, I don't find worrying about the distinction that interesting. As ttaM says, we already use a lot of machine learning day to day, and this will only get more pervasive.
225: I don't imagine a future, in the next fifty years anyway, in which strong AI, pervasive nanotech, and so on, exist.
Pervasive nanotech will almost certainly exist, in terms of a manufacturing sector based mostly on nanotechnology. The early stages of this are already happening, and would be my bet for the development with broadest impact on society since the original Industrial Revolution. Next fifteen, twenty years at the outside.
234: Even if they have the magnitude right, they've probably missed some big details.
The problem is, most of them are accustomed to thinking of the challenges as engineering problems. With "AI" for example, they don't think (or want to think, perhaps, because it spoils the fantasy) about any of the basic social problems related to artificial consciousness. For example, in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input. This would remove one of the main features -- relative predictability -- that makes computers useful in the first place. So why would anyone want such a feature? Why would anyone invest energy in keeping such an "AI" running, except perhaps eccentricity?
For example, in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input. This would remove one of the main features -- relative predictability -- that makes computers useful in the first place. So why would anyone want such a feature?
If you've got input that's useless without interpretation? Like, any problem that computers can't deal with now?
247: If you've got input that's useless without interpretation?
And why would having an "AI" carry out that kind of higher-level interpretation be more useful than simply having a human do it? It's not comparable to tasks like number-crunching in which the computer has an indisputable advantage.
||
Hello, my fellow Americans. I'm a bit timid and largely lurk, but wanted to thank you all for the Mo' Money thread a couple of days ago.
I just requested a pay raise this morning. I've been frustrated since I took this job six months ago. The responsibilities are significantly greater than expected and my position is an attack point from several directions. I found that I was the third person in this position within the last couple of years.
I'd thought the drill was to put yourself on the market and use any offers as leverage. But after the Mo Money thread, I realized that I was in a good position to just ask for the money. I'd taken a couple of personal days recently and they actually thought I was interviewing (I neither confirmed nor denied).
So I put in a request for a 20 to 25% raise. With a prepared justification argument. Supervisor (who I like) actually said 'Whew, I thought you were going to quit' and seems receptive. Wow. Never did anything like this before. Feeling all empowered and shit.
I'm back to lurking now. Like I said, not one for conversation. Just wanted to share.
|>
re: 248
Cheapness? I can think of lots of situations where corporations would want that.
Call-centres? Why outsource to India when you can use the Turing-o-tron 9000 Callcentrematic?
248: Without actually making one, we don't know that they wouldn't be much better at it than we are. Which contains its own problems [1]. The issue of low cost 24/7 labor rears its shiny metal head, too.
[1] see a few thousand dystopic SF stories
248: Because I can see advantages to having something with human-class judgment and also the sort of brute-force information retrieval and number-crunching abilities that computers have. Think, say, evidence-based medicine. You need human-class judgment to turn a patient into a comprehensible set of signs and symptoms for diagnosis. But the human exercising that judgment tends to rely on the medical knowledge in her head, rather than all the medical knowledge in the world, because people are like that.
A 'doctor' with human-class judgment, but full, instant access to all research that's ever been published, would probably be an awfully good doctor.
And I'd bet you could say the same sorts of things about other domains.
A 'doctor' with human-class judgment, but full, instant access to all research that's ever been published, would probably be an awfully good doctor.
What the tech people often fail to consider is the social/political aspects of uptake. We are already in a situation now where some technologies that objectively improve medical performance are rejected by MDs, others by lawyers. Eventually they will almost certainly be taken up, but it will probably require a generational change. Medicine is bad that way (as are other areas).
251: "Better" is a function of social value. It's very easy to imagine AI that makes arguably smarter interpretive decisions than humans. That doesn't mean those decisions will necessarily be useful to humans or acceptable to human social judgment.
And of course this is the stuff of dystopic SF, HAL 9000 etc etc, but it's more banal than that. If you design something genuinely conscious, with "human-level" or better interpretive capability, and task it with making medical decisions or running a call centre, what's to stop it from deciding it would rather paint, or compose rock operas? Who's going to tell it that isn't the smarter decision? If it's not constrained by human fleshly concerns, it won't necessarily be constrained by human concerns about duty, social ostracism etc. either.
If you limit it so it can't disagree with you about such things and such problems don't arise, then you have to limit its interpretive flexibility, and what you wind up with is a thin simulacrum of consciousness, thereby defeating the whole purpose.
253: Because I can see advantages to having something with human-class judgment and also the sort of brute-force information retrieval and number-crunching abilities that computers have.
How about a human, with a computer, that has a database on it?
249- Gosh darn it. I'm so sorry to those in this wonderfully international group who may have been slighted in my greeting. Such an isolationist time we live in, to which I briefly contributed. Very depressing. Carry on, fine folks.
253: It's a speed and integration problem. As a lawyer, I'm basically what you describe -- a human, attached to a computer, with a database in it. I'd be a better lawyer if I read faster and had more perfect retention, so I could make connections flawlessly between things I'd read at different times -- I have to rely on a whole lot of memory to get my work done at all. If you could build something with my conscious judgment, but that could errorlessly search the whole corpus of written law every time it had a passing thought of something that would be useful, it would kick the living shit out of me as a lawyer.
If you design something genuinely conscious, with "human-level" or better interpretive capability, and task it with making medical decisions or running a call centre, what's to stop it from deciding it would rather paint, or compose rock operas?
You build it so that it wants to make medical decisions more than anything else in the world. Motivations aren't arrived at logically - they're wired in.
In one of Richard Morgan's books, there's an AI that runs a hotel, and someone remarks that it's "hard wired to want customers the way people want sex". There's your answer.
Is it moral to create a conscious being and manipulate its motivations like that? No idea.
256: Yes, I didn't say it would necessarily be better, just that we couldn't rule that out. Without actually having such a thing, it's pretty speculative to say `I can't see how that would be helpful/better/whatever'.
On the latter point though, LB is right. A human with a huge database doesn't do such a good job at some things because the human can't handle the information density very well. Maybe, however, a human with a big database and an AI-ish bit of code to help navigate it... but as I noted there are already uptake problems with simpler stuff.
Or would it be practical? One constraint for making AI useful is that you would be able to build in motivations -- if strong AI is possible, it's also perfectly possible that we'd be able to figure out how to make something conscious, but not to design it very specifically. The "Dial F for Frankenstein" possibility -- you make a sufficiently big, complex, whatever computer and it 'wakes up', but that doesn't mean it's taking orders.
258 is exactly right. Out of curiosity, LB ... my impression is that law is much more open to using related tech than medicine is; does that match what you see? On the other hand, the tools available are much weaker, I think.
Congratulations, Calvin!
As for "the thing with the cup," I'm one-hundred percent in agreement with PerfectlyGoddamnDelightful that the most appropriate response is confusion and then wrath at the author for being purposively obfuscatory.
258: Integration is one thing, and I can see a genuine possibility for hyper-advanced database interfaces that respond to your thoughts and can build those connections for you. (Chalk the Prosthetic Database Implant up as some Cool SF Shit I believe could actually happen.) But isn't your ability to get distracted and drift off on tangents an integral part of "your conscious judgment"? Aren't those elements of your consciousness just as arguably features as defects? Would the interpretive judgment of a computer system that eliminated them be actually more useful in a social sense? I doubt it.
259: You build it so that it wants to make medical decisions more than anything else in the world.
And control its definition of "medical decision" how? If it can genuinely interpret, it can engage in metaphor.
in order for "AI" to be "conscious" it has to be able to interpret, and therefore able to mis-interpret or argue with, input.
What a great feature. Your computer makes an interpretation and draws a conclusion. Then when challenged on its assumptions, it gets defensive and starts to yell.
262: Yeah, I think so. The situations aren't parallel, exactly -- I think there's a whole lot more of medicine that could be substituted with weak AI, expert systems and the like, than there is for law, so expert systems aren't threatening to lawyers the same way they are to doctors. To replace persuasive writing and speech, they're going to need consciousness, not a database.
265: I really don't know why you are assuming these are uniquely human features/defects. I think we have to separate two ideas: One is really-shiny-tech, basically an extrapolation of what we can do now, + integration etc. The other is a putative machine consciousness. Maybe it's not realizable. If it is though, we have absolutely no idea what it would be like, and there is no reason to assume it would perform like a human does. No reason to assume it wouldn't either.
The former we can get some feel for where it's going. The latter is pretty much an unknown.
236: Which isn't so far from what happened in a way, agriculture has moved over to a large scale forced N-P-K cycle that doesn't rely much on dirt.
That was already pretty much there in 1958, though. And dirt is still needed, otherwise the tropical rainforests would all be cropped by now (their soil is easily exhausted).
260: Without actually having such a thing, it's pretty speculative to say `I can't see how that would be helpful/better/whatever'
Watching the evolution of AI to date, some things aren't that speculative, really. For example, the most fruitful directions in AI research have been moving away from the idea of replicating human consciousness. To say that human societies are unlikely to place much trust in the interpretive judgment of nonhuman consciousness, which is the likeliest form of AI "consciousness" to be achieved in the coming century at least, is not to go out on very much of a limb.
268: The other thing is, the law is not a system that is as amenable to scientific study. Given a big enough and accurate enough database containing only symptoms and outcomes, we could probably do some pretty amazing diagnostic things using simple statistics, particularly for obscure presentations. This is not really possible for a human to do, but wouldn't be particularly `intelligent'. Building such a DB is a different problem, of course.
I don't see an equivalent db for law. One thing you could do is a classification of case-law that includes all connections, both presented and even merely researched. So if you were looking for something on a new case, as soon as you established connections it could estimate relevance across all other cases. None of this relies on the system knowing anything about the cases, just on what *other* humans have done. Of course, that's a weakness too.
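To make the "simple statistics" point concrete, here's a toy sketch: a naive-Bayes-style ranking over a hypothetical symptoms-and-outcomes table. The records, symptom names, and diagnoses below are all invented for illustration; nothing here comes from an actual medical system.

```python
from collections import Counter, defaultdict

# Hypothetical records: (set of observed symptoms, confirmed diagnosis).
records = [
    ({"fever", "rash", "joint_pain"}, "dengue"),
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough", "wheeze"}, "asthma"),
]

def rank_diagnoses(symptoms, records, smoothing=1.0):
    """Rank diagnoses by P(dx) * prod P(symptom | dx), with add-one smoothing."""
    dx_counts = Counter(dx for _, dx in records)
    sym_given_dx = defaultdict(Counter)
    for syms, dx in records:
        for s in syms:
            sym_given_dx[dx][s] += 1
    total = len(records)
    scores = {}
    for dx, n_dx in dx_counts.items():
        score = n_dx / total  # prior from outcome frequency
        for s in symptoms:
            score *= (sym_given_dx[dx][s] + smoothing) / (n_dx + 2 * smoothing)
        scores[dx] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"fever", "rash"}, records))
```

On this made-up data, {"fever", "rash"} ranks measles and dengue above flu and asthma purely from co-occurrence counts -- no "intelligence" involved, which is the point.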
265: The interesting thing is that the Prosthetic Database Implant, if it shows up, is going to have a huge effect on who's good at stuff. My strong point, as a lawyer, is really a matter of memory. I've read enough law and have integrated it well enough that on most legal issues I've come near, while I couldn't give you the case names without doing the research, I've got a pretty reliable sense as to how they're going to come out, which makes the actual research and writing a smoother proposition. If everyone can run and read ten searches in their head during the course of a conversation, what I've got isn't much of a skill anymore -- something different, about how well you play with your implant, will be.
269: I really don't know why you are assuming these are uniquely human features/defects.
They don't have to be. But the idea of eliminating these features / defects, as LB illustrates, is one of the major selling points of machine "consciousness." My point is that if it proves to be achievable, the human version will likely still be more socially useful.
273: Totally. Come that day, being a pothead will no longer be a disadvantage. And then, dear LB, the world will be mine. All mine.
271: That doesn't help you much because most directions of related research are moving away from the idea of pursuing consciousness at all. The nut proved a lot harder to crack than initially thought, and people are mostly chasing smaller, probably constituent parts, which are also proving very hard.
There is some woolly, handwavy stuff that doesn't seem to have got very far at all in `AI'. There are some related areas that have made very concrete progress over the last 20-30 years, but they are mostly unconcerned with issues of consciousness at all.
It's hard to tell how it will play out, though. The kind of memory I've got makes Google a huge advantage for me -- I don't actually know much about anything, but I've got enough of a hook into most topics that I can get to a search that will give me an answer I want. Google lets me talk on the level with people who really do have expert knowledge, and increases the apparent gap between someone half-educated like me, and someone who doesn't have my mile-wide-millimeter-deep erudition, so has trouble looking things up.
Is the PDI going to work for me like Google? Can't tell till it shows up.
My point is that if it proves to be achievable, the human version will likely still be more socially useful.
I see that's what you're saying, but I don't buy it. Or rather, I think it's either tautological (within human society, humans socialize best) or unknown *and* unpredictable.
276: That doesn't help you much because most directions of related research are moving away from the idea of pursuing consciousness at all.
Well, for the sake of argument I'm granting that the smaller, harder nuts will be cracked in time for "consciousness" to become an issue at all. Of course that's not a foregone conclusion.
277: What I suspect is more likely is that emphasis on information management will trump specialization on particular types of information.
278: I think "within human society, humans socialize best" is an empirically observable fact, not an abstract tautology. A survey of how non-humans tend to fare when encountering human society demonstrates this pretty amply.
Well, for the sake of argument I'm granting that the smaller, harder nuts will be cracked in time for "consciousness" to become an issue at all.
Ok, but to the degree that this is true, I think it's misleading to try and extrapolate from what research in this area is doing to what would be attempted then. After all, the lowest levels of our own vision system are best understood reductively, but that doesn't tell you much at all about `human vision'. There is no direct path that way from understanding the signal processing done in the retina to understanding the HVS.
281: It's not observable with the presence of a human-level (or higher) consciousness.
But I probably worded that badly, anyway. I see no reason to, as you have, assume that any constructed intelligence would not have these same properties. For all we know they are necessary. And I don't see the direction of much of current research to be useful to extrapolate from.
284/283 crossed, I think we're understanding each other now.
A survey of how non-humans tend to fare when encountering human society demonstrates this pretty amply.
This is silly, isn't it? What non-humans with human-level intelligence and communication skills were you thinking of?
LB, that was an enormous "if". The instant search of a more-than-human memory is already here (though a lot of effort would have to be put into feedback mechanisms making sure that the AI medical memory in use is accurate and up to date.)
But the "human-class judgment" part -- the "if" -- is more or less undescribeable. There's no consensus as to what good human-class-judgment is. There are a lot of very bright flesh and blood humans whose judgment is horrible because of blind spots and obsessions.
I can see a point arriving when AI can reliably match human-class bad judgment. Reliably matching human-class good judgment would be much harder to do, because it would be hard to recognize success, since there isn't a lot of consensus about good judgment. (Though I suppose that once you've reached the "average" or "not bad" level, you've succeeded.)
If I understand correctly, one intended effect of the updated evidence-based database is to take certain kinds of medical questions out of the area of judgment and make them automatic.
I have a friend who works in AI, and he has a robot that is essentially a giant proto-eyeball. And the point of the research is to get the robot to see, in the track-an-object sense. This has turned out to be very, very hard, even before we consider whether the robot knows that it's seeing.
Thus I conclude that we will not have conscious robots within the next 50 years.
we will not have conscious robots within the next 50 years.
You mean I won't be getting my Adrienne Barbeaubot? That sucks.
288: There are some good reasons to consider vision `ai complete' (meaning that if you have `real' vision, you have `real' consciousness/ai). It's that hard a problem.
At lower levels though, like what your friend is doing, there is lots of neat stuff on the go. Object tracking and recognition have made some real leaps, but not many in the `mimic what humans do' directions.
At some point we will get back to the problem of what "consciousness" or "the mind" is. My own opinion would be that to the extent that it performs like a mind, it is one.
More interesting to me is whether AI minds are minds of persons. A lot of AI energy seems to come from very specialized people who don't especially like or understand most other people and are trying to construct artificial minds without the messy traits they don't like in others. In fact, there seems to have been a centuries-long trend toward training flesh-and-blood people to be more properly abstract and mental and rational and scientific and less fleshly and emotional and humorous, which ended up producing scientists whose goal was to produce minds even less human than themselves.
287: Oh, I'm not expecting to get to AI consciousness soon, or really ever, just talking about what it would be like if it did show up. But you're right -- someone/thing with perfect information retrieval doesn't need the same quality of judgment, because the answer to a whole lot more questions becomes unambiguous.
286: What non-humans with human-level intelligence and communication skills were you thinking of?
Far as we can tell, we pretty much wiped them out when we came in contact with them about 28000 years ago.
we will not have conscious robots within the next 50 years.
Mitt Romney. Al Gore. I refute you thus.
Apo, the artificial bimbo is already here for you. Just rinse it out after each use.
293: Okay, but if you're talking about Neanderthals, you have no idea what happened to them 'societally'.
296: We have enough DNA evidence to know that they didn't vanish through interbreeding, which at least minimally tells us they were not socialized with anatomically modern humans. The exact mechanics are otherwise obscure, that's true.
(Although what human populations have subsequently done to each other probably provides some clues.)
290: I'd have to agree; if you get 'vision', you've probably got 'mind.' But it's just really, really, hard. Like, 'the robot eye tracked an object without crashing the computer, let us celebrate and dance with joy' kind of hard.
289: Barbeau is now a Van Zandt of the Springsteen Van Zandt family.
Argh I wish I had time to keep up with this thread. To sum up: soup, Cala: well, sort of. The rest of you goofballs: goofballs.
297: Well, no. It says that they weren't interfertile, presumably for social rather than genetic reasons. That doesn't mean there was no social contact between the two groups, and it hasn't got thing one to do with possible modern-day reactions to machine intelligences.
What non-humans with human-level intelligence and communication skills were you thinking of?
The Thetans are still with us, and their clammy fate was not bitter. Each of us bears a real Thetan within us, and thus virtual Thetans control the internet.
299: Yup, I'm very, very familiar with this part of it.
301: well, we can talk about it some other time if you'd like. I don't actually work in AI, but what I do work in overlaps in places. So while I have a good feel for what's going on there, there are areas of it I don't pay much attention to. I think I've got a fairly good handle on the broad strokes though (but of course, happy to learn new stuff!)
We did joke, however, that his robot was beginning to show traditional sci-fi signs of consciousness, viz., his robot inexplicably stops working whenever I stop by the lab, thus displaying the usual troubles robots cause around women.
291 is good.
301: Come back, Sifu! You can't just leave us hanging like that!
302: It says that they weren't interfertile, presumably for social rather than genetic reasons.
By "not socialized with anatomically modern humans" I mean "not integrated into modern human societies." Of course there can be all sorts of "social" contact between otherwise relatively discrete societies. Who knows but that they probably traded, raided, mooned each other, made crude jokes at one anothers' expense and so on. But we're talking about integration of nonhuman with human, which the evidence we have to date suggests happens on terms of subjugation or does not happen at all.
The relevance, of course, is to the question of whether "humans socialize best within human societies." And yes, there is an extremely vast abundance of evidence that this is true, to which the objection that it's a "tautology" is not convincing. Ergo, the idea that nonhuman AI consciousness (and even if designed to replicate the human mind, it would still be in important senses nonhuman) would be socially valuable to humans to the extent of, say, being trusted to doctor them or dispose of their relatives' estates, is dubious.
290, 299, 304: Of course, there's mind and mind. A rat can see, so while there's an argument that if you've solved vision you've solved mind on some level, there could be as yet unguessed at problems between the truly awesomely challenging "We've created a hamster-level machine intelligence" and "We've created a person-level machine intelligence."
(I have no idea what those problems might be, but you see what I mean.)
304: oh, I think you do, I just think you may be giving a little short shrift to some of the possible avenues for making inroads into understanding and modelling pieces of consciousness -- probably nothing you don't know about, just might be a matter of relative optimism.
I also think it's sort of misleading to conflate vision and consciousness, or even really to tall about either of those things as single problems. You could definitely achieve a lot of the functionality of vision (see e.g. Face detection) without addressing anything about self-reflective consciousness, and you can address elements of (especially social) consciousness without having the sensory pieces there.
306: The relevance, of course, is to the question of whether "humans socialize best within human societies." And yes, there is an extremely vast abundance of evidence that this is true, to which the objection that it's a "tautology" is not convincing.
Yes, it is convincing. If all you're saying is that nothing that can't pass for human will be absorbed seamlessly into human society, no shit, Sherlock. (And I'm still not getting your 'vast abundance of evidence'. I see one data point (which I hadn't actually known had been established), that Neanderthal DNA hasn't survived in modern humans. Is there another relevant data point?)
But how that gets you to machine intelligences wouldn't be useful or accepted in performing tasks, I don't see at all. They might not be, and I would guess are unlikely to ever exist, but the fact that no one's going to be having sex with their artificially intelligent doctor-system doesn't mean they won't ask it to diagnose their rashes, if it's better and cheaper than a human doctor.
"tall", "Face": this is what happens when I try to be substantive typing on my phone.
308: I think we're on the same page. I was trying to be a bit careful by using scare quotes: `real' vision being something that does everything we think of as vision... but individual functionality is a very different story. It's difficult to be broad and precise and also not blather on at lengths annoying to the unfoggedatariat.
We're not actually very good at face detection yet, but we're getting pretty far on constrained versions of it... very much falling under my latter `progress made in directions not mimicking the way humans do it'.
I suspect relative optimism does come into play. There are lots of interesting avenues, I agree. The ones that have made the most measurable progress by far are the more reductionist, less `AI' areas. But this is probably as much because they are attacking simpler problems and have more to leverage.
309: Is there another relevant data point?
The experience of the entire remainder of the plant and animal kingdoms in encountering and interacting with human societies, yes. (Of course, the idea that this is relevant to any discussion of "consciousness" is itself not widely accepted, since the idea that animals have anything worth describing as mind at all is a relative novelty in most modern cultural contexts. Which itself is an interesting sort of a data point.)
But how that gets you to machine intelligences wouldn't be useful or accepted in performing tasks, I don't see at all.
Well, let's unpack. If something that can't pass for human will not be absorbed "seamlessly" into human society, as you seem willing to stipulate (I think it's understatement but will accept it for the sake of argument), what does that mean concretely? On the level of day-to-day social interactions, which is how the actual influence of any form of intelligence is largely decided?
I think that the evidence about the Neanderthals isn't quite as unambiguous as DS thinks. IIRC the question is still up in the air.
305: Perhaps if they picked that up and ran with it, they could devise a horny visual system able to recognize T&A in a tenth of a second as far away as 60 feet or so. Maybe with a little virtual erection and masturbation module.
Once they had the basic (male) humanity built in, they could start adding on other forms of visual perception by treating visual space as a collection of transformations of T&A.
312: The experience of the entire remainder of the plant and animal kingdoms in encountering and interacting with human societies, yes.
I would have thought it was uncontroversial that plants and animals generally differ strongly from humans in their mental capacities. Aren't we talking about machine intelligences that would more closely approximate human intelligence?
On the level of day-to-day social interactions, which is how the actual influence of any form of intelligence is largely decided?
Damned if I know, but damned if you do either. If some form of human-level machine consciousness is developed, unlikely as it seems, we'll see what influence it has when it gets here.
On the example I've mostly been kicking around, medicine, I can't think offhand why AIs wouldn't be accepted. I don't socialize with my doctor now: I go in, say "[X] hurts, or looks funny" and they grunt and write a script, which generally makes me feel better. I'd be fine if the grunt were replaced with a beep, so long as I felt better at the end of it.
John, your 291 reminded me of a bit I once read about the history of computer animation. At some point animators started seriously asking themselves why everything they rendered came out looking plastic. It turns out that plastic reflects light with white highlights, while most substances reflect it with highlights in their respective colors. It's just that the animators worked in environments where pretty much everything is made of plastic. Took a while to nail that one down. I sometimes wonder what equivalent oversights there might be in early synthetic consciousness.
My own guess for views I hold most likely to seem ridiculous in 50-100 years is my views on what constitutes humanity in various senses, including the moral - who has what kind of claim on me (and everyone else) because of their humanity. I think it quite possible that improved tools for neurological analysis and communication assistance will lead to a general recognition of the human-level sentience of at least one other species, leading to a (lumpy and uneven) shift in accepted usages of "humanity" beyond homo sapiens. But this won't be the utopia of animal rights activists. An expanded sense of what's "human" overall will go with some more categories of "humans who have claims on us" within it that may well end up constricting some rights I think of as universal. It'll be weird and, to use a word repeatedly, lumpy.
At least one major philosophical category that's important to me - feminism? socialism? - won't exist at all as a category in a couple generations, and students of that era will be baffled by the strange connections we draw between what are to them obviously very different things, much as most of us have to do now with the progressive movement of the early 20th century and the role of prohibition and eugenics, and so on.
Erudite trolling will be alive and well, and so will cock jokes. Better understanding of biology and the application of weak nanotechnology to the manufacture of tailored drugs will enable a much wider range of cock jokes.
My understanding re: the DNA evidence from Neanderthals is that it seems unlikely they interbred with us. But there is also a bit of genetic evidence out there suggesting otherwise.
There are arguments that a human-level consciousness would have to have desires (Antonio Damasio in several books), would have to have a body (Francisco Varela and others in "The Embodied Mind"), and would have to be acculturated and participate as a person in interactions with other people (no specific citation).
For example, human sensation isn't just information processing, it is learned and developed through physical interactions of embodied humans manipulating physical reality, moving about in it, and learning to recognize the way each part of reality impacts the organism in terms of its needs.
However, a lot of AI seems to aim for non-human consciousnesses purged of human weakness.
317 has mostly beat me to the punch (with more specific examples) in what I would have said to 314.2. And sure, the differences in capacity are part of the story, but an equally significant part of the story is reactions to difference. For that matter, the slowness with which humans have often been willing to regard other human populations as fully human is another data point.
Of course the outcomes of all this are speculative. I'm just taking exception to the idea that we have no basis at all from which to speculate; I think we have some pretty specific clues from which to speculate.
I'm not sure the medical profession is the example you're looking for. The "doctor as pill dispenser" mode of interaction is a historical anomaly that has generated serious problems of its own (such as antibiotic-resistant microorganisms, for instance), and the slowness of the medical profession in transitioning away from it has generated a multibillion dollar market in alternative and "holistic" medicine. There's a reason why some female patients tend to prefer female doctors, and it's not because the social aspect of medicine is insignificant.
As to why a lot of AI seems to aim for machines purged of human weakness, the most obvious answer is because computers are most useful to human societies as tools that don't emulate human "weaknesses," or the quirkier and less predictable facets of human consciousness. The closer machines get to reproducing human consciousness, the likelier it is that they'll start reproducing those quirks, which makes them less useful in the role of tool, as we talked about a bit upthread.
This is unfair of me, because I dropped the discussion last night to go out and get a drink in order to scare-quote celebrate close scare-quote Valentine's Day. But, "If that's a straight question and it means "Can statements of ethical judgment be assigned truth-values," then no, not in the same sense in which nonmoral statements can (in theory) be."
Later, "Students had a much easier time believing there was a past in which consumer capitalism was common, brutal and wrong and people fought against it than they had believing that there was a past in which consumer capitalism was common, brutal and wrong and people fought for it."
What's the status of your claim that consumer capitalism was wrong? I'd want to say that it's true (or false if the evidence points that way). I know you don't want to say it's true, but then I have no idea what you do want to say about it. It accurately expresses your emotions towards consumer capitalism?
For that matter, the slowness with which humans have often been willing to regard other human populations as fully human is another data point.
But a data point for what? I'm really not clear what you're arguing here. Way above, it looked as if you were arguing that human-intelligence-level AIs (if they existed) wouldn't be any use -- a sufficiently computer-aided person could do anything an AI could do. And that struck me as a really unlikely thing to say confidently, for the reasons I gave above.
Now you seem to be arguing that socially integrating AIs would be difficult, or people wouldn't like them, or something like that (I'm being vague not to dismiss your argument, but to indicate that I'm not clear on exactly what you're driving at), and pointing to inter-ethnic racism and other conflicts as support. And I'm sure you're right that there would be huge social problems around AIs, if they were ever developed. But I can't see that meaning they wouldn't be used or useful, which I at first thought you were arguing.
I don't know if I disagree with you or about what -- I'm arguing mostly because you're sounding very certain about very speculative stuff.
There have to be more categories than "facts" and "emotions". An emotion is by definition private, unshared, and does not entail obligations on others. An ethical judgment may have both an emotional and a factual component, but it's not a fact and it's not an emotion, because by definition it entails an obligation.
The positivists asserted that everything that was not a fact was an emotion, and emotions distort the perception of reality. They were rather like nihilists that way, and both of them are a sort of apotheosis of a kind of value-free scientific point of view (i.e., the belief that values are bad.)
Sounding more certain than I am is my speciality. If it helps, feel free to preface all my posts with a disclaimer saying "it is my admittedly imperfect and half-assed speculation that..."
My contention is that:
1) The usefulness of tools is a function of their social utility, not some absolute criterion of efficiency;
2) Computers have amply demonstrated their social usefulness as tools;
3) Owing to the complications that come with being able to engage in interpretation, metaphorical extension etc., computers with something like consciousness would be inherently less useful as tools, and competing for a social niche that existing humans already occupy.
that existing humans already occupy.
I should add, "and could conceivably continue to occupy effectively enough that having computers do the work wouldn't be much of an advantage, in comparison with the social obstacles."
I'm sure an actual ethicist is going to have something to say here, but there are non-cognitivist metaethical views that aren't simple emotivism and varieties of cognitivism that aren't some simple form of moral realism.
Yeah, and I can't buy into 3 at all. There are a lot of tasks where the combination of human-class judgment and brute-force information retrieval and processing power would seem to me to be very useful, not strongly dependent on the sort of social interaction that you're seeing as inherently problematic. Again, my judgment with perfect recall of all written law would make a kick-ass lawyer, and that's a very social task. When you get into geekier stuff -- any kind of technical design, say -- I'd think an AI of the sort we're thinking of would be so useful that people would find ways to work around the social problems.
Well, 265 to 325. Something of an impasse at this point.
Discussions of consciousness with no real consensus on what that actually means, Emerson recapitulating Hilary Putnam, LB taking the side of our robot overlords; has the world gone mad?
I'm sure an actual ethicist is going to have something to say here, but there are non-cognitivist metaethical views that aren't simple emotivism and varieties of cognitivism that aren't some simple form of moral realism.
I used to know about this, where "this" is different ways of classifying ethical views and "know" means took one class and did most of the reading, but I've forgotten a lot of it. I did intend my question about truth aptitude (I almost wrote this as "aptness") to get at cognitive/non-cognitive distinction.
325: I think that "human-class judgment" is the doubtful term in what you're saying. AI processes the data in its enormous memory -- it doesn't just retrieve information. A lot of sophisticated technical processes are completely automated and do replace human-class judgment, but this is because the processes are better understood now. (Metallurgy is an example -- before materials science, it was an empirical art).
In other words, AI can replace human class judgment when some area is moved out of the art/craft/skill area into the area of fully-understood processes. But that's different than having human-class judgment.
re: 328
I've actually taught it, but only insofar as I have masqueraded as an 'ethicist' for the purposes of teaching the occasional undergraduate revision class. If/when filling out job applications I wouldn't list it as an 'area of competence'.
There are people who post here who genuinely know stuff about it.
329: That's an argument that we're not going to get strong AI (which I think, ignorantly, is pretty likely to be correct). All I've been arguing about with Slack is if we did develop strong AI (yeah, Tweety, not rigorously defined or anything. Some machine which you could interact with in a manner generally appearing to be conscious, able to perform those cognitive tasks which now only people can perform. That's still not rigorous, but you know what I'm handwaving at.) whether it would be any use.
"Actual ethicist" = "metaethicist". People who talk about ethical questions per se are amateurs and cranks. Ethical questions are sometimes useful for testing metaethical theories, but so are trolleycar problems.
John, I realize you're trolling, but "actual ethicist = metaethicist" is just wrong, even by the academy's definitions.
331: that is extremely vague, though. You mean all tasks a human can perform? Interacting in a human-like way in all circumstances? It's very likely we'll keep chipping away at the edges, as we have been, perhaps indefinitely. Indeed, it may be that machine consciousness could be utterly dissimilar to our own, as implemented.
You mean all tasks a human can perform? Interacting in a human-like way in all circumstances?
And I haven't got the knowledge to get much more specific. But handwavingly, what I know about AI research is that it's divided into stuff that's not even a little consciousness-like, like expert systems and so forth (really really useful and important, but not much related to how biological minds function), and stuff that's really, really low-level perception. Something with what would be perceived as initiative and judgment, also capable of high-level cognitive functioning, even if it wasn't much like a human to interact with, is what I'm thinking of.
But that's not much less vague, and it's informed by a combination of a lifetime of SF reading and pop-science articles, not anything more substantive.
People keep telling me about the great stuff in philosophical ethics, but I don't see philosophical ethics playing much of a role in real-world ethical discussions. I believe that I've even seen it asked "Why should it?" Because philosophical ethics is too specialized and technical for real-world people.
Bob Somerby, who studied philosophy many years ago, asked why public political discourse today is so wretched when we have so many wonderful philosophers. He mentioned Nozick and Rawls specifically, who are really at the relatively engaged end of political philosophy, but really haven't contributed much of anything real. (Nozick more than Rawls, but his influence has been pretty negative. Singer strikes me as a bad example too, since he provides sophisticated arguments for the tiny audience of philosophical animal-rightsers. The energy of animal rights comes from elsewhere.)
John, your 291 reminded me of a bit I once read about the history of computer animation. At some point animators started seriously asking themselves why everything they rendered came out looking plastic. It turns out that plastic reflects light with white highlights, while most substances reflect it with highlights in their respective colors. It's just that the animators worked in environments where pretty much everything is made of plastic. Took a while to nail that one down. I sometimes wonder what equivalent oversights there might be in early synthetic consciousness.
By your use of `rendered' I assume you mean computer animation. In which case this is mostly a just-so story. Or at least, while some animators may have asked the question (if they didn't understand the underlying tools very deeply), it was never really an `oversight'.
The understanding of physical optics has been far ahead of computer graphics since before computers existed. The problem isn't the models, it's the computation.
It turns out that fairly crude approximations to the `correct' models (e.g. Phong shading) are much *better* approximations to what goes on with plastics, because the interaction of plastics with light is fairly simple. The problem is that the mathematics may be reasonably easy to write down, but a) real-life interactions aren't simple and b) even the simple interactions can be very expensive to compute. But specular highlights and things like the Fresnel effect were understood well before anyone asked a computer to try them.
The history of computer graphics consists mostly of a mix of slowly increasing the accuracy of approximation for the relevant integrals/stochastics (e.g. global illumination) and hacks that are fast & wrong but look pretty good (e.g. games).
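For the curious, here's a minimal sketch of the classic textbook Phong model being talked about above -- a scalar, single-light toy in Python, not any particular renderer's code, and the coefficient values are arbitrary placeholders. One thing it makes visible: the specular term depends only on geometry and a shininess exponent, and takes the light's color rather than the surface's, which is part of why everything shaded this way tends to read as "plastic."

```python
# A toy sketch of the classic Phong reflection model (single light, scalar
# intensity). Vectors are normalized inside; coefficients are made-up defaults.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir,
          ambient=0.1, diffuse=0.6, specular=0.3, shininess=32.0):
    """Return a scalar intensity for one light under the textbook Phong model."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)

    # Diffuse term: Lambertian falloff with the angle between light and normal.
    diff = max(np.dot(n, l), 0.0)

    # Specular term: reflect the light direction about the normal and compare
    # with the view direction; the shininess exponent narrows the highlight.
    r = 2.0 * np.dot(n, l) * n - l
    spec = max(np.dot(r, v), 0.0) ** shininess

    return ambient + diffuse * diff + specular * spec

# Example: light and viewer head-on gives the bright, tight highlight
# (here 0.1 + 0.6 + 0.3 = 1.0) that reads as a hard white "plastic" dot.
print(phong(np.array([0.0, 0.0, 1.0]),
            np.array([0.0, 0.0, 1.0]),
            np.array([0.0, 0.0, 1.0])))
```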
Indeed, it may be that machine consciousness could be utterly dissimilar to our own, as implemented.
Right, which is why extrapolation from the current situation is a bit of a mug's game.
w/d up there at 319: What's the status of your claim that consumer capitalism was wrong? I'd want to say that it's true (or false if the evidence points that way). I know you don't want to say it's true, but then I have no idea what you do want to say about it. It accurately expresses your emotions towards consumer capitalism?
Basically what 321 and 324 say.
328: I did intend my question about truth aptitude (I almost wrote this as "aptness") to get at the cognitive/non-cognitive distinction.
Still don't know quite what you mean by truth aptitude, and as already noted, there's a hell of a lot of metaethical water under the bridge already about this stuff. The easiest answer, given that this thread is dead anyway, is that you can, if talking about moral 'truth' seems necessary (and it often does so seem), talk about moral *reasoning*, noting that it differs in important ways from nonmoral reasoning, noting that what counts as evidence for moral claims is rather different from that for nonmoral claims, and thereby noting that applying the notion of truth-status to the former is a forced fit that necessarily distorts the ways in which moral claims work.
There are a bunch of red herrings to be avoided along the way, not least of which is the prescriptive (moral) / descriptive (nonmoral) divide. "It's raining," e.g. can be used as a prescriptive statement.
Speaking again very (very) crudely, the kind of question you ask insists on an epistemological reading of ethics ('What sort of knowledge is moral knowledge?' 'What sorts of facts make moral statements true?'); whereas I'd read it in terms of philosophy of language*: what are we doing when we engage in moral language-games? I'm a Wittgensteinian, baby. In the beginning was the deed. The question is not how we map ourselves onto the world as receptors, but how we participate in it as agents.
* The professional philosophers will cringe at that distinction.
I agree with LizardBreath. Assuming we do get AI as good as humans and the cost of this AI is low enough, such AI will be massively used.
My hunch is that getting AI to human-level intelligence will be very hard, but moving AI from human-level intelligence to greater-than-human-level intelligence won't be that difficult, especially since humans won't necessarily be the ones doing the work.
The question is not how we map ourselves onto the world as receptors, but how we participate in it as agents.
This sounds like a phenomenological understanding of ethics, not necessarily one that focuses on language specifically.
341: Dude, you're using big words. You're messing up the categories. Don't you know that analytic philosophers hate phenomenology?
Mega-late, but several points on the next-50-years scenarios:
1) I think the role of genetic engineering and other "direct" manipulations of human beings has not been given its proportional due in this thread.
2) I agree that it will continue to be human-computer symbiosis that gets most of the traction in this period. "Judgment" will increasingly get merged right into the search, so you will have the option of getting partially analyzed results.
3) There will be a lot of attention to advancing the human-computer scenario, given the commercial demand right now for improving on today's absurd Blackberryish gyrations (perspective from a man with big fingers). For input there aren't that many channels for fine motor control past the fingers, eyeball, tongue, and sort of the toes; for output, miniature heads-up displays and embedded earphones will go a long way toward satisfying demand for years. The latter may impede development of more direct neural inputs.
from a man with big fingers
"You know what they say about men with big fingers."
"No, what do they say about men with big fingers?"
"Their emails have a lot of typos."
344: As do their Unfogged posts.
"You know what honey? Maybe you could just leave your pants on tonight."