I wonder whether it's going to have an effect on law firm staffing.
That's just cruel, LB.
I could actually imagine it being easier to make a Jeopardy-answerer than a general question-answerer. Most of the answers are going to be trivia items - people, places, etc. - which allows for a lot of narrowing down. Not that it's not an achievement, but I wonder how applicable it will be to more open-ended questions like "What cases developed jurisprudence in maritime wages?"
Eh, I mean, it's very cool, and a good indication of the (impressive!) state of machine learning research, but I would caution you against thinking this implementation specifically is going to generalize. For all that they contain wordplay and so on, Jeopardy questions are actually fairly regularized in their form, and it's working with the whole corpus of "answers" from the show's history, if I remember right. Also, like Deep Blue, I have a feeling their approach (throw a fuck-ton of money at building an enormous supercomputer, and then apply a kitchen sink approach to each query) is going to end up being too inefficient to be useful for real-text searches; anybody can throw a weighted combination of every search method at a problem, but the more you do that, the more tuning is required, and the less well it generalizes.
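(If "a weighted combination of every search method" sounds abstract, here's the kitchen-sink approach as a toy Python sketch -- every scorer, weight, and candidate below is invented for illustration:)

# Toy kitchen-sink answerer: score each candidate answer with several
# independent methods, then combine the scores with hand-tuned weights.
def keyword_overlap(question, candidate):
    # Fraction of the candidate's words that also appear in the question.
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(c), 1)

def length_prior(question, candidate):
    # Jeopardy answers tend to be short noun phrases.
    return 1.0 / len(candidate.split())

SCORERS = [(keyword_overlap, 0.7), (length_prior, 0.3)]  # <- the tuning problem

def best_answer(question, candidates):
    return max(candidates,
               key=lambda c: sum(w * f(question, c) for f, w in SCORERS))

print(best_answer("This Assyrian king assembled a great library at Nineveh",
                  ["King Ashurbanipal of Nineveh", "Sargon", "Hammurabi"]))

(Shift the weights around and the winner changes, which is exactly the tuning-and-generalization problem.)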
If you want to look for Skynet within IBM, I'd say the Blue Brain project is a somewhat more interesting place to look.
All those caveats aside, yeah, neat! It'll probably win at Jeopardy.
Having now skimmed most of the article, I'm kind of disappointed - I was expecting a conceptual breakthrough from the intro and, as Sifu says, it's just throwing more and more computer power at a hodgepodge of statistical techniques. I suspect we're going to need better theories of consciousness, and how the brain represents meaning, before we can get satisfying improvements in tasks like this.
Could someone explain why it's so hard, and why a supercomputer is needed? I had the opposite reaction to LB.
Another thing about Jeopardy questions, thinking about it: I've noticed that one of the reasons guessing works so well with Jeopardy questions is that the questions involving some obscure subject matter often have as their answers (reverse "answer" and "question" there if you're pedantic) the most famous concept or personage within that subject area. So if there's a question about (say) an Assyrian king, the answer's probably going to be "Sargon".
6: Yeah, a bit like how crosswords are full of "Etna" and "iota" and "eke."
I don't mean to be dismissive, exactly. Like I said, it is very impressive that we're this good at document classification. But I think the missed questions in the article are illustrative: when it fucks up, it fucks up both badly and incomprehensibly.
In terms of grand ML projects I'd say the Netflix prize might be somewhat more impressive: improving the rates at which you can predict what movies somebody likes, based merely on what they've seen before? That's very hard. I certainly can't do that with any kind of consistency.
Another point to note: if you'll notice, the scientists (well, if you count Wolfram -- maybe "scientist" is more accurate) quoted in the article are sort of minimizing expectations, and the people spinning grand dreams of medical and legal question-answering services are either trying to sell million dollar servers, the software that runs on them, or the consulting time to hook them up.
I find it charming that IBM is still trying to sell supercomputers.
11: you noticed my note, I notice. Noted!
improving the rates at which you can predict what movies somebody likes, based merely on what they've seen before?
Do we know if they liked what they saw? If they're terrible at choosing movies they'll like, it would be hard to extrapolate much.
14: I believe in the actual Netflix prize the metric was user-submitted ratings, rather than simply whether they watched (or, since it's Netflix, received and then mailed back unwatched) them.
Actually, the article's kind of frustrating; it gives enough information that I can guess at several possible things which might be being attempted, but not really enough to tell if they're actually doing something new. At the core, it sounds like they're using good old fashioned latent semantic analysis, which is just taking a big pile of documents and figuring out combinations of words which tend to appear together or not. You can use it to solve SAT analogy problems about as well as actual SAT takers, but that doesn't seem like a "to the bunkers!" moment either.
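(For the curious, the core of LSA really is that small -- a minimal sketch using numpy's SVD, on an invented four-document "corpus":)

# Minimal latent semantic analysis: take the SVD of a term-document
# count matrix; words that co-occur across documents land near each
# other in the low-rank latent space.
import numpy as np

docs = ["the king ruled assyria", "the king built a great library",
        "lava flowed down etna", "etna erupted again"]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

U, s, Vt = np.linalg.svd(counts, full_matrices=False)
word_vecs = U[:, :2] * s[:2]          # each word as a 2-d latent vector

def similarity(a, b):
    va, vb = word_vecs[index[a]], word_vecs[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

print(similarity("king", "library"))  # high: they share a latent "topic"
print(similarity("king", "etna"))     # near zero: no shared context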
15: that's correct, the prize was for predicting ratings.
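(For anyone wondering what "predicting ratings" looks like mechanically: the workhorse of the prize entries was matrix factorization -- learn a small vector per user and per movie so that their dot product approximates the observed stars. A toy sketch, with made-up ratings:)

# Toy matrix factorization for rating prediction: fit user and movie
# vectors by stochastic gradient descent on the observed ratings, then
# predict an unseen (user, movie) pair with a dot product.
import random
random.seed(0)

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0)]  # (user, movie, stars)
K, n_users, n_movies = 2, 2, 3
P = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(n_movies)]

lr, reg = 0.05, 0.02
for _ in range(1000):
    for u, m, r in ratings:
        err = r - sum(P[u][k] * Q[m][k] for k in range(K))
        for k in range(K):
            p, q = P[u][k], Q[m][k]
            P[u][k] += lr * (err * q - reg * p)
            Q[m][k] += lr * (err * p - reg * q)

# Predict user 1's rating for movie 1, which user 1 has never rated.
print(sum(P[1][k] * Q[1][k] for k in range(K)))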
I don't think Sargon counts as Assyrian, does he? Too early.
12: It seems like IBM is mostly a consultancy these days, with a sideline in mad science.
Funny how the article doesn't mention that Wolfram|Alpha is a machine only capable of answering "I don't know what to do with that input" to any question that isn't precisely tailored to what it knows.
20: I use Wolfram|Alpha semi-regularly for simple technical queries like "what is the refractive index of hydrogen gas." Anything more complicated seems to confuse it.
"I could actually imagine it being easier to make a Jeopardy-answerer than a general question-answerer"
Oh, definitely. I mean, I imagine Google could knock up a rough and ready version in an hour or two. It's a much simpler task for an algorithm to identify an item from a list of known characteristics than it is to generate (on a generalised basis rather than within a given format) the most pertinent characteristics for a known item. Now, Watson seems to be a great deal more sophisticated than that, but the point is that Jeopardy style questions are much easier for computers than conventional questions - witness the difficulties Wolfram Alpha often encounters with fairly simple queries.
Also, the article unintentionally highlights one of the many glorious ironies of the internet.
"Type that clue into Google, and you'll get first-page referrals to "elementary, my dear watson" but none to deerstalker hats"
This statement is no longer true, thanks to its very publication.
Wolfram|Alpha seems roughly equivalent to that Pocket Ref book, but with more graphs. Which is cool, but like so many things Stephen Wolfram does, not quite up to its billing.
It is hard to live up to your billing sometimes.
22: I had that same thought as I read it.
7: Etna is a constant lately. On the order of Omoo.
Ione Skye is the cutest of all the frequent crossword answers.
22, 25: n-gram googlewhacking for n > 3: the next great artificial intelligence challenge.
26: Etna is a constant lately
Yes, most of the recent eruptions have been small ones on the flank.
Whatever happened to that thing where IBM was trying to get into wearable computing? I want my contact-displays! Maybe I'll just have to wait for Apple to get there, and then put up with them gluing it to my eye and only letting me get programs from the App Store.
26: "Lately"? I'm just glad "esne" and to a lesser extent "eft" are deprecated.
Huh. While the article seemed pretty clear that they weren't doing anything wildly new, I was still very impressed because I was thinking of Jeopardy questions as not significantly different from natural language questions generally.
I was thinking of Jeopardy questions as not significantly different from natural language questions generally.
The guy at the deli keeps getting angry when I say, "What is one pound of Swiss cheese?"
I beat the IBM machine despite totally blanking on such obscure words as "saddlebag" and "footlocker".
It got 4 out of 5 of the corporate conglomerates but had no idea on the "before and after" questions.
26 Across: Hurler Hershiser famed among crossword solvers.
a nearly horizontal entrance to an underground mine, and a crossword favorite.
35: I liked that rather than "ditty bag" its preferred answer was "Papa's Got a Brand New Bag".
32: Not a fan of the late Eugene T. Maleska?
Obligatory reminder that the British Ministry of Defence built Skynet decades ago; it's now in operation but, as far as I know, has nothing to do with the UK's nuclear weapons or its small fleet of killer robots.
33: it is impressive! It just isn't necessarily particularly generalizable, not least because the questions are explicitly constructed to have a single, unequivocally correct answer. A question like "is this document materially concerned with shipping regulations?" or "is this paper relevant to the treatment of persistent skin rashes in elderly patients?" or whatever is very different.
32: You know, my undergraduate major was Medieval Studies. While the classes I took made no coherent sense at all, I did take a bunch of medieval history classes. Never saw the word 'esne' other than in a crossword. I think some desperate puzzle designer made it up back in 1937, and other puzzle designers have been copying it since then.
40: Even worse, what's really wanted is "Here is the client's proposed course of action. [[Paragraph or seventy in natural language follows.]] Is it compliant with all applicable shipping regulations?"
41: It appears to have only been in anything like common use in Old English, and then in some 19C retro works (Ivanhoe). Old English is effectively a different language; I don't see how that puzzle-designer lived with himself.
42: What would be almost as useful would be a reliable answer to "What regulations apply?" Although, now that I looked at the interactive thing Ned pointed out, and got a look at the type of wrong answers Watson was coming up with, I don't think it's all that close to being able to do that kind of work.
44: If that ever could be automated, my guess is that regulations would increase in number and complexity to the point that nobody would be able to understand the regulations without the software. I'm growing more and more convinced that society needs more social and political movements that involve torch-bearing crowds who smash machines.
Answering the unambiguous trivia questions isn't very impressive, but some other things are. From the NYT article:
Over the rest of the day, Watson went on a tear, winning four of six games. It displayed remarkable facility with ... sophisticated wordplay ("Classic candy bar that's a female Supreme Court justice" -- "What is Baby Ruth Ginsburg?").
During one game, a category was "All-Eddie Before & After," indicating that the clue would hint at two different things that need to be blended together, one of which included the name "Eddie." The $2,000 clue was "A 'Green Acres' star goes existential (& French) as the author of 'The Fall.' " Watson nailed it perfectly: "Who is Eddie Albert Camus?"
Obligatory reminder that the British Ministry of Defence built Skynet decades ago; it's now in operation but, as far as I know, has nothing to do with the UK's nuclear weapons or its small fleet of killer robots
Correct. It's a satellite communications network.
48: What is the sincerest form of flattery?
46: That's the sort of thing that had me wildly impressed. Looking at the interactive feature in the article, I'm less so -- the wrong answers on the wordplay questions make it look as if it's not 'understanding' the wordplay in any meaningful sense (like, the wrong answers aren't well-formed in terms of the wordplay).
46: I'm not sure those are any harder than general Jeopardy questions (they might even be easier). It's a specially labelled category where you know you have to find two unrelated 2-word answers that overlap, right?
46: yeah, I imagine there's some hardcoded trickery for the "Before & After" categories; it doesn't seem like it would be too hard to slice the question in two and try to splice the top options together in various ways. In general, I suspect they have specific algorithms to deal with each of the common wordplay categories (before and after, starts with a letter, ends with a suffix, etc.). That certainly seems like the easiest way to do it, especially since the nature of the wordplay is signaled in the category name.
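(To make the splicing idea concrete, a toy sketch -- the candidate lookup is a hard-coded stub, not anything resembling Watson's actual machinery:)

# Toy "Before & After" splicer: fetch candidate answers for each half
# of the clue, then splice any pair that overlaps on a shared word.
def candidates_for(clue_half):
    stub = {
        "A 'Green Acres' star": ["Eddie Albert", "Eva Gabor"],
        "the author of 'The Fall'": ["Albert Camus", "Jean-Paul Sartre"],
    }
    return stub.get(clue_half, [])

def splice(before, after):
    b, a = before.split(), after.split()
    return " ".join(b + a[1:]) if b[-1] == a[0] else None

def before_and_after(half1, half2):
    for c1 in candidates_for(half1):
        for c2 in candidates_for(half2):
            answer = splice(c1, c2)
            if answer:
                return answer
    return None

print(before_and_after("A 'Green Acres' star", "the author of 'The Fall'"))
# -> "Eddie Albert Camus"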
The pwner and pwnee category is more difficult.
If that ever could be automated, my guess is that regulations would increase in number and complexity to the point that nobody would be able to understand the regulations without the software.
I think that this has basically happened with US taxes. Is there any movement toward standardized clauses in law? (this paragraph defines escrow obligations for the bank, these paragraphs define duty, breach, causation, and damage...).
54: And that is exactly what I was thinking of.
I'm constantly frustrated by how Nosflow|Alpha is unable even to answer efficiently such questions as "What grammar-little-bitchery might be applicable in responding to this blog comment?".
31: The iEye will be announced by Steve Jobs in 2020. It will be banned in 2021 due to people watching 3D porn while driving.
57: Or mass transit will finally flourish beyond the wildest dreams of environmentalists.
54: I think that does happen occasionally, as with the Uniform Commercial Code.
Could someone explain why it's so hard?
The short answer is that every human being has experiences that inform their understanding, and computers don't have experiences at all. Every human being is the outcome of a couple billion years of adaptive evolution, has instincts and senses and emotions, desires and fears. Computers lack all these, and so lack the essemtial background common to all human understanding of natural language.
One of the ways this difference manifests is in dealing with ambiguity -- natural language is full of ambiguous constructs, and humans apparently resolve them on the basis of that shared background and past experiential learning.
We don't know how to build hardware that has the kind of fine-grained massive and multileveled parallelism seen in the human brain. We don't even know how the brain is organized and connected except in the most general terms. We are only beginning to understand the rudiments of how the brain processes language. So direct modelling of the biological processes of cognition is right out.
For a literary treatment of one approach to natural language "Artificial Intelligence" (the connectionist approach pursued by, among others, Doug Lenat at U Illinois Champaign-Urbana), see Richard Powers' Galatea 2.2.
46: Those particular examples should actually be comparatively easy to achieve using the standard sort of word-correlation techniques I was talking about.
54: I think the morass that is the tax code is that way in part by design. Complexity makes cheating easier, makes it easier to play "oops, I misunderstood" when caught cheating, and generates huge amounts of money for tax preparation firms.
62: And, on the other side, makes it easier to get campaign cash for adding deductions and the like. A certain amount of complexity is necessary as the economy is complicated, but past that, increasing complexity just helps the elites defraud the people one way or another.
I'm going to print out 60 and post it above my desk.
63: I'm not sure the tax code needs to reflect the complexity of the economy. Only if you want to tax different things at different rates do you run into real problems. Clearly some people think that's desirable (quite apart from the graft angle), but I think the price paid in tax code obfuscation is not worth the payoff. If the government wants to support some activity or other, let them cut a check and make it an explicit subsidy rather than using the tax code to hide the shifting of the burden onto other taxpayers.
Uniform Commercial Code.
Getting US states to cooperate is a pretty low bar to have cleared. Do even the Bahamas use the UCC? Any other place at all?
I was thinking of stuff like this, except functional. Ideally, a set of structured documents that could be used by legislators in individual countries to at least define terms, potentially also rules.
Taxes are complicated because an extra clause in the tax code is worth hundreds of thousands of dollars in campaign donations. Until that changes, the tax code will not simplify.
Boy, I hated Galatea 2.2.
I don't love 60 either, but I'm not sure I have it in me to explain exactly why. Maybe it's because it's actually University of Illinois Urbana-Champaign?
Nah, that's not it.
Maybe it's because it's sort of glib in its lyricism, and discounts the fact that (1) you don't have to understand something completely to model it -- you model it in order to understand it, and (2) everything in there applies almost as well to modeling the behavior of cockroaches, and nobody ever says that the rich behavioral world of a cocktail can never be recreated inside a sterile, dead computer.
I'm going to print out 60 and post it above my desk.
60 is welcome and true, but I hope we don't need to be reminded of it on a daily basis! I mean, we know that, right?
The tax code is a thing of beauty, subtlety, and fine complexity. I sometimes envy tax lawyers. Note that much of the complexity, however, comes down to problems in figuring out what is and what is not "income."
And to 54 and 66, most contracts and other legal documents are more or less standardized, very excessively in my view (that is, even very good lawyers on very big transactions rely mindlessly on precedent and form language, to their detriment and my gain).
the rich behavioral world of a cocktail can never be recreated inside a sterile, dead computer.
It depends on the computer's cooling system, really.
66: I think some of that goes on in business regulation, though in a purely nonbinding way, like the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (which is very big-business/regulator cozy, based on its membership). There are also some treaty obligations to pass certain laws, although that's harmonizing substance, not detail.
He who devises a computer that can experience pain and boredom will have made a truly impressive advance.
But, you know, ascientific or not the rough contours of 60 are true. Human brains did evolve over millions of years (billions, I guess, if you're starting at abiogenesis). People do indeed have instincts, senses, emotions, desires, and fears. When you get into arguing that computers lack those things, you'll get into trouble, probably (computers pretty obviously do have senses, "desires" are pretty easy to model, "fears" seem like they should probably be rolled up into "emotions", and if you accept the somatic marker hypothesis (which you are by no means required to) modeling emotions should be pretty doable as well) but that's okay.
The thing about humans resolving ambiguity by reference to shared background and past experiential learning is probably true enough, although I don't know that it's the whole story (extra-linguistic cues probably play a role, among other things). Experiential learning can obviously be modeled (e.g. like they're doing with the Jeopardy bot), although it's not totally clear how biologically realistic this is at the moment.
It's certainly true that we don't really have the capacity to build a computer that models a human brain per se, but whether that's a deal-breaker is very much up in the air (you might be able to model aspects of natural language processing, you might be able to get good results by integrating more sensory input, who knows. It's a very active area of research), although it's clear that you can definitely model a lot of the things brains can do without building a toy brain per se (which makes sense! You can model a lot of the things e.g. buildings can do without building a whole other building to test on). Direct modeling of biological cognition is of course possible -- look at the Blue Brain project, among other things -- if not on the scale you'd need to do anything practical.
And, in conclusion, I hated that book.
70: wow. That's awesome. I guess I know what I'm thinking about while I should be working today.
Computers are, however, very adept at determining Hottest Media Personalities (Off-Air).
67 : that's what makes horse races. I certainly don't think it's Powers's strongest work, but I liked it -- maybe because I follow the AI scene a little, and knew a little about Lenat's research (and Reid's famous criticism thereof).
Did you read Gain? Or Gold Bug Variations? Or Echo Maker? I thought all of those were better.
Sorry about reversing Urbana and Champaign. Also, I misspelled "essential". I shall strive to improve.
76.1: I have no idea why I disliked it so much. I read it quite a while ago. I have not read anything else by Powers, no.
Also you responded quite amiably to my grumpiness, which was nice of you, and I note.
No matter how well you model fermentation a computer will never know what it means to be drunk.
Is a Chinese room more likely to get flushed when it drinks?
Brinebots drink rotgut which is a rough mock-up sort of simulation of liquor. We're not there yet.
77: Having read your stuff at the Institute for lo these many years, I suspect that you might like Gain if you read it.
(I think I remember that you like John McPhee, and Gain has some of that body-of-knowledge info-dump trait so characteristic of McPhee's writing.)
And any time you want to denigrate my own writing by calling it lyrical, you go right ahead.
81 is bad on so many levels.
I think it would be a lot more interesting to build intelligence that wasn't a copy of human intelligence; not least because we'll never agree on whether we've done it.
67: No artificial intelligence will ever create anything to equal the genius of the cockroach/cocktail clause.
84: you know, it probably is, but I thought about whether or not it was, like, totally racist, and given the actual distribution of the ALDH2 gene I argue that... well, who knows.
The last panel reveals that Utahraptor's claim that intelligent machines don't exist yet is not true. T-Rex would like to rebut the claim, but earlier he was warned by the machines that if he makes their existence known they will eliminate him.
So, it's funny because dinosaurs are extinct?
Moby you are cruising for an elimination.
I stopped with the hyphens, mostly.
That particular comic is probably not the best introduction for someone not already familiar with Dinosaur Comics.
This one elicited audible laughter from me.
91: Oh. I've never read that comic before, I don't think.
It is certainly a remarkable way to reuse art.
Only if you want to tax different things at different rates do you run into real problems.
As Halford, Esq. correctly notes, this is completely wrong. Measuring business income is not trivial.
The thing about complex regulations, and American law generally, is that the words are just a starting point for the development of an acceptable range of interpretation. A lot of the really abusive tax shelter crap was a problem of the big CPA firms and the high-end tax bar pissing away decades' worth of credibility about how that should be done.
103: Not trivial, but not exactly rocket surgery, either. It's done routinely and there are generally accepted ways of keeping track of stuff. No need to add complexity by taxing different kinds of income at different rates.
My preference for simplicity is a lost cause anyway, so no point arguing over it.
Is a Chinese room more likely to get flushed when it drinks?
I got into an argument about this with a Chinese colleague who seemed rather offended by the idea that people might refer to something as an "Asian flush". He insisted that Westerners are just as likely to get flushed when drinking, and that the flushing has no unpleasant effects, and that it doesn't make people less likely to be alcoholic. Google seems to support me in saying that all of these claims are wrong.
104.1: It's done routinely by well-paid accountants. Those "generally accepted ways of keeping track of stuff" are known as GAAP, and the results they produce are as debatable as tax accounting. I agree with you that we'd be well-served by doing away with special rates for capital gains, but it's not like the Code or Regs were a whole lot shorter during the period when that was how it worked.
105: I'm Irish and Italian by ancestry, but I flush very easily after one or two drinks. I also get flushed from embarrassment/nerves more easily than most people I know. I don't play poker for anything but low stakes and I'd make a shitty shoplifter.
106: True. I don't want to suggest that corporate accounting is a doddle. I've done accounts for small companies and it sucked even with straightforward wholesale dry goods and no weird assets.
I don't play poker for anything but low stakes and I'd make a shitty shoplifter.
I guess you better hold on to your job then.
I never associated East Asians with being "flushed" while drinking. More like being incredibly drunk and sick to their stomach while drinking.
105, 110: It's definitely a real thing, but I couldn't cite percentages.
112.2: I have several friends who have been known to look pretty much like that.
OT: Yggles wins the award for best blog post heading in recent memory.
I remember reading from Saiselgy that there was some rightwing outfit (Club for Growth?) that lobbies under the radar against any tax code simplification, just to make the process frustrating for people and thus encourage anti-tax sentiment. Or maybe that's just a convenient side effect of fighting to preserve all the loopholes? Either way, if there's anything to this, I'd hope it could gain a bit more widespread awareness. I've never heard it mentioned outside the aforementioned blog.
On preview: concomitant mentions of Yggles purely coincidental.
113.2: The URL for that entry and its headline make a curious pair.
Oh yeah. The Asian flush was well known, and fair game for teasing amongst my friends.
110 -- Seriously, how is it possible that someone who was in college in the past 30 years -- in the sciences, no less -- has never heard of the Asian flush?
114: IIRC they've also lobbied against withholding, on the theory that if everyone had to write a big fat check to the government taxes would be even less popular. I think the so-called "Fair Tax" is in part motivated by a desire to remind people about taxes every time they make a purchase.
Speaking of Germans and flushing (not that anybody else is connecting them), I think I've mentioned how creepy I find German toilets. You know there are problems in a society when, "I want a good view of my poo before it goes to the sewer" is something people say to their plumbing fixture designers.
110 -- Seriously, how is it possible that someone who was in college in the past 30 years -- in the sciences, no less -- has never heard of the Asian flush?
How is it possible that someone from China hasn't? I'm puzzled. Though he also likes to joke about how the Japanese think sake is as strong as wine when really it has the same alcohol content as American beer, which is also false, so I guess this person just isn't to be trusted on matters alcohol.
Oh hey, Btock-style (not like that!) question: yesterday I bought some prepackaged hummus and put it in my hotel refrigerator. Today I discover that the hotel refrigerator was unplugged the whole time. How bad is hummus likely to get in ~20 hours, at room temperature?
Is it open, essear? I'm guessing it's fine either way, since there's not much to spoil, especially if it has oil and lemon juice in it.
120: Possibly you encountered the Chinese version of a bro?
121: Try it and report back! (I'd eat it if it smelled all right.)
Can't you feel not-cold?
I'm not sure how I didn't notice. Opened the refrigerator, tossed it in, didn't pay attention, I guess.
It's not open. But no lemon juice in it. Says "Keep Refrigerated" on the label. Nothing obviously amiss, though. I guess I'll eat it.
120: If the one type of sake I have tried is representative of the flavor of the drink, I think I'd rather have unrefrigerated hummus.
Hummus lasts forever in the fridge, so I bet it would last pretty well not-in the fridge.
The hummus I usually buy has "sell by" dates a week or two ahead. I've never gone past because I like hummus. I'm trying to eat "not cheese" as my pre-sleep snack.
OT: As several of our UK commenters know, I'm coming to London in late July and want to put out meetup feelers. But I also have a dear friend who's spending the whole summer there doing research and would love to possibly get together with some nice people if you're up for it! She's lovely, charming, friendly, etc. Anyhow, if anyone's up to go out with a friend of mine after work or something, let me know and I'll put you in touch!
Huh -- I'm just about to put up a meetup post for the first week in July, when I'll be there. Your friend should show for that one.
132: Ah crap -- she will be visiting her family from 7/1-11. Poop.
Re 121: I've let a lot of comments go by without comment, but I want to say that I believe I've garnered an unfair reputation for eating weird things. I'm actually an unusually conservative eater. If there's not a package, with a sell-by date on it, I'm usually disinclined to eat it, because I don't trust my eyes and nose to tell me what's fresh and safe and what's not. There was the one time with the cottage cheese, which I thought had magically become bleu cheese, but (1) that was only once, (2) it had a package with an expiration date, which had not passed, and (3) John Emerson told me it was okay to eat. And I've learned my lesson--cottage cheese that looks and smells and tastes like bleu cheese isn't good for eating, regardless of what the expiration date might say.
IF ONLY 135 WERE TRUE, I MIGHT STILL BE ALIVE TODAY.
Oh, that too I guess, arguably. But that egg was perfectly fine. Not expired--that's why I wanted to eat it.
So 1.5 incidents, at most.
Oh, and the thing about the moldy bread, I guess.
Does it help if we tell you that it's endearing? At least I find it endearing.
Let's not forget the time your wife, who knows you best of all, left you breast milk to drink.
Does it help if we tell you that it's endearing? At least I find it endearing.
But that's what troubles me. It's not that I mind the comments. They don't bother me. But, the reputation is unearned. It's not the real me. The thing you find endearing is a falsehood.
142: Of whom among us is this not true? I'm not sure what the stereotype of me here is anymore, but it certainly used to be something that was alarmingly dissimilar from how my IRL friends perceived me.
This is the internet, Btock. We can only ever be fond of personas here. You've developed a much-loved persona here.
It isn't like my presentation here is filled with nuance.
I think it's part of you now, Brock.
143: We just believe that you are unaware of this aspect of yourself.
Of whom among us is this not true? I'm not sure what the stereotype of me here is anymore, but it certainly used to be something that was alarmingly dissimilar from how my IRL friends perceived me.
I think the stereotype of you here is someone whose IRL friends are all bizarre maniacs, so that's very likely.
Oh, now that I'm thinking more I guess there was also the hamburger from out of the bottom of the dirty fish tank. But I was intoxicated at the time. (And also I guess the stuffing from the inside of a pillow--same.) Maybe there's something else I've shared in the past but am forgetting now. Fine, I guess the reputation has some basis of support. As long as people understand that, as a general matter, if you proffer inedible foodstuffs, I'm not automatically going to scarf them down.
The friend in 131 is not a bizarre maniac, by the way!
148: Like Heebie said, endearing. A little frightening, but endearing.
if you proffer inedible foodstuffs, I'm not automatically going to scarf them down.
Not without discussing it at some length first.
143 and 144 are among the truest things ever writ herein. Hereon. Upon.
there was also the hamburger from out of the bottom of the dirty fish tank. But I was intoxicated at the time. (And also I guess the stuffing from the inside of a pillow--same.)
Was the hamburger soggy? How did I miss these the first time around?
The hamburger was pulled from the bottom of a fishtank--of course it was soggy.
(I'm not about to wage battle against the hoohole over this. I lose those fights every time I try.)
I once ate a Twinkie that had been buried in a potted plant for a year.
The hamburger was pulled from the bottom of a fishtank--of course it was soggy.
Hey, I was just trying to give you the benefit of the doubt. Maybe the fish tank was empty. But by all means eat a hamburger from the bottom of a tank full of nasty algae water and fish. It's, uh, endearing.
I realize my 64 made more sense inside my head. I meant to communicate that I thought 60 was a useful summary for pushing back against the worst of the technotopia offenders (The human brain is just like a computer, if you break it down into little parts! See?), but even more than that I was responding to this:
natural language is full of ambiguous constructs, and humans apparently resolve them on the basis of that shared background and past experiential learning.
Sifu is of course right that there are other things going on as well. But speaking as someone who spends inordinate amounts of time trying to figure out why miscommunication happens, what leads people toward one of many ambiguous interpretations, how to structure meetings and policy discussions so that people don't end up talking at cross-purposes, etc., I thought it was a wonderful reminder of things to take into account.
It's most apparent in a cross-cultural context, of course, but there are lots of times when you think someone shares a background and experience with you -- or even worse, you don't even stop to think about it -- and then you turn out to be wrong, and something terribly important founders.
I hate avoidable disasters, is what I'm saying, and joel managed to synthesize something useful about one of my hobbyhorses, and I was/am grateful.
She says, hoping that her communication is clear and unambiguous.
I said I was intoxicated. It was the last White Castle burger, and we were all too drunk to drive out and buy more. What was I supposed to do? (Someone threw it in there in an effort to feed the fish, since we were out of fish food, but the fish ignored it.)
Brock, the hamburger was with bun? And was in there long enough for you to determine that the fish were ignoring it? Okay, you seem to have survived.
Carry on.
Yes, with the bun. I'm not sure what else goes into "and everything". I can't say I recall for sure, but I don't think there was any ketchup or mustard, for example. Just your typical burger/pickles/bun. Maybe cheese, who knows.
This is the opposite of the direction I intended 135 to lead.
(The human brain is just like a computer, if you break it down into little parts! See?)
By some definition of "computer", sure it is. How that is meaningful or useful is of course a much bigger question.
And was in there long enough for you to determine that the fish were ignoring it?
We're not talking about a matter of weeks. Less than a few hours. This all happened over the course of a single night.
Did you count the fish before and after that night?
I... can't think about this hamburger. If I think right at it, I may never ingest food again.
Less than a few hours.
And now I'm outright laughing. It's cool -- you apparently didn't mind chowing the thing down, and didn't suffer ill effects beyond any hangover that may have occurred, so it's all good.
Think about the bun, AWB! Sitting in the fish tank for some period "less than a few hours!"
OK Brock, now tell us about the pillow!
Sifu, somehow I suspect you and I don't hang around the same type of techno-cheerleaders. I'm not talking about people making an approximately right, loose analogy.
Let's not forget the bizarre (and scary) loss of weight. I was worried for you there.
Less than a few hours.
Fuck me, in tears at my desk again.
I'm actually an unusually conservative eater.
168.2: RTFA! (I tried to find it, but the hoohole wins again.) Short story: inadvertently ingested LSD (I think?), and I somehow became convinced the stuffing from the pillow was cotton candy, and I ate it all. Mmm, fiber.
But I wouldn't do that in my right mind.
(Incidentally, in looking in the archives, unsuccessfully, I did come across this over-two-year-old statement of the fact presented in 135:
I have a strong aversion to any food that does not come in a sealed package with a clear expiration date printed on it.
So, see, as I said, really, other than a few unfortunate aberrations, I'm very conservative with this stuff.)
other than a few unfortunate aberrations, I'm very conservative with this stuff
Your aberrations have made up for in magnitude what they lack in quantity.
Other than a few unfortunate aberrations, I've never killed anyone.
Just your typical burger/pickles/bun. Maybe cheese, who knows.
And, of course, fish poop. Not to put too fine a point on it.
Aside from the fish poop, algae is pretty much the same as lettuce.
So, see, as I said, really, other than a few unfortunate aberrations, I'm very conservative with this stuff.
A strong aversion to eating such wacky things as, say, all fresh produce doesn't really contrast with the aberrations -- it just fits into a grand, delightful crazy-eatin' whole.
Short story: inadvertently ingested LSD (I think?), and I somehow became convinced the stuffing from the pillow was cotton candy, and I ate it all. Mmm, fiber.
This is a story straight out of Go Ask Alice. Fantastic! (It is not much more usual to eat a pillow while tripping than it is to eat one at any other time.)
inadvertently ingested LSD (I think?)
I'm afraid to ask what you thought you were eating.
"desires" are pretty easy to model...modeling emotions should be pretty doable as well)
so instrumentalist. The computer still won't want to do anything. Until we make some kind of new breakthrough computers will remain just very sophisticated tools for their programmers.
Of course, that's not incompatible with computers deskilling lawyers. No reason they should become more humanlike to do that. The striking thing about Deep Blue is how un-human it was, how it didn't replicate a human chess player's modes of thought but instead optimized the strengths of the computer. Watson seems the same way.
Direct modeling of biological cognition is of course possible -- look at the Blue Brain project, among other things -- if not on the scale you'd need to do anything practical.
This doesn't necessarily get you out of the Chinese room. I'm sure it would be helpful, though.
It was the last White Castle burger, and we were all too drunk to drive out and buy more.
I actually find it a little more disturbing to eat a White Castle burger than a burger from the bottom of a fishtank. I really don't get White Castle.
It is not much more usual to eat a pillow while tripping than it is to eat one at any other time
very true!
178: well, you say that, but I was actually eating some lettuce just about a week ago, and a caterpillar climbed right out of the bowl. He'd been hiding in the lettuce. Caterpillar poop: better or worse than fish poop? No different, I'd say.
was actually eating some lettuce just about a week ago, and a caterpillar climbed right out of the bowl
The perils of organic produce.
It is not much more usual to eat a pillow while tripping than it is to eat one at any other time
I"m aware of this. It's one major reason for the "(I think?)" in 173. It was during an outdoor music festival, if that adds any explanatory value. Regardless, the whole thing's on video.
183: Just think of it as pre-butterfly poop. I mean, everyone likes butterflies.
186: If you put it up on YouTube and give us the link, we'll all send you money. You could buy yourself White Castle hamburgers!
Or, of course, anything else that it seemed like a good idea to eat.
I only like artisanal hand-selected butterfly poop produced by butterflies that are not coerced or caged.
so instrumentalist. The computer still won't want to do anything. Until we make some kind of new breakthrough computers will remain just very sophisticated tools for their programmers.
You say this, and have said it before, but that doesn't make it not silly. You can assert it all you want, but still: silly.
The Chinese room is kind of a clever paradox, and makes potentially interesting points about the philosophical grounding of "Strong AI", but it explicitly has fuck-all to do with what is actually possible in terms of simulating language acquisition on a computer. That you keep bringing this position up indicates to me that you completely fail to understand this elemental point, which Searle was entirely clear on thirty years ago.
Also, if you can build a Chinese room? You understand how language works. It is not possible to do it otherwise.
183: Through the wonders of techmology, I'm able to wash lettuce and such right in the comfort of my own kitchen.
The striking thing about Deep Blue is how un-human it was, how it didn't replicate a human chess player's modes of thought but instead optimized the strengths of the computer.
And if you were aware of any of the developments in chess AI over the past thirteen years, you would know that the current state of the art revolves around much more human-like strategies hinging less on how many moves you can look ahead and more on inference from specific learned sub-configurations on the board.
And just in case you're going to go and read about that and come back and say "but the human programmers are telling the machine some of the sub-configurations to look for!" take a moment to think about how human players learn strategy.
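(Very crudely, the flavor of it is something like this -- the features, weights, and board encoding are all invented for illustration; real engines use thousands of learned features:)

# Sketch of feature-based position evaluation: the "knowledge" lives in
# learned feature weights rather than in deeper search. The position is
# just a dict of counts standing in for a real board representation.
WEIGHTS = {"material": 1.0, "king_safety": 0.4, "passed_pawns": 0.3}

def features(position):
    return {
        "material": position["my_material"] - position["their_material"],
        "king_safety": position["castled"] - position["open_king_file"],
        "passed_pawns": position["passed_pawns"],
    }

def evaluate(position):
    return sum(WEIGHTS[name] * value
               for name, value in features(position).items())

print(evaluate({"my_material": 39, "their_material": 36,
                "castled": 1, "open_king_file": 0, "passed_pawns": 2}))
# -> 3.0 + 0.4 + 0.6 = 4.0; tuning those weights from game data is the
#    modern, "human-like" part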
192: In the text-based adventure game that is Brock's life, you only get to use the command "WASH" after helping the appropriate kindly villager who teaches it to you.
Tweety, fess up: you're the Nerd Terminator, here to compile a target list for Skynet.
194:
You are in the kitchen. There is a basket of vegetables.
>> Wash vegetables
You do not know how to do this.
>> Google vegetable washing
You are very hungry.
>> Put vegetables in sink
The sink is full of frozen hamburgers. You are faint from hunger.
>> Turn on water
The sink water courses over the hamburgers, turning the buns slimy. You notice that the vegetables are covered in insects. You are faint from hunger.
>> Ask internet about eating insects
The internet is encouraging. You are faint from hunger.
>> Eat everything
YOU WIN
Sifu's offended because secretly he is, himself, a robot.
I don't intend to be defending the claims of strong AI, necessarily, by the way. The goal of exactly simulating human consciousness is either impossibly far off, impossible, or merely deeply silly. But to leap from that to "well, natural language processing is pretty much impossible" or "you can't model emotions in a computer!" bespeaks a vast ignorance of what's actually happening in machine learning.
And to say "well, even if you could simulate emotions, they wouldn't be emotions" in an internet thread makes me want to make fun of you, as may be clear by this point.
But the truth is I'm IBM's latest project: a computer that can simulate a friendly, non-judgmental, relentlessly positive blog commenter.
I know we've had this entire conversation previously, but washing fruits and vegetables never seems worthwhile. Especially something like lettuce. What's water going to do? It's not like I could scrub the things with soap.
I do sometimes dip them under the faucet briefly, but my heart's never in it. If the food was so dirty that I genuinely believed it needed to be washed, I probably just wouldn't eat it.
Or, of course, anything else that it seemed like a good idea to eat.
A tin can, perhaps. Maybe Brock has satyr ancestry.
bespeaks a vast ignorance of what's actually happening in machine learning
Well, I definitely have that, but: are computers really being programmed to have emotions (or emotional responses to things)? If so, how does that work?
THAT'S RIGHT, BUDDY. NO SENSE AT ALL IN WASHING LETTUCE. DON'T LISTEN TO THE HATERS.
201: Shearer, apparently, was version 1.0.
197: To get the good ending to the game you have to figure out what to do with the BEAN THING.
Good Christ, Brock, how did you feel the day after you ate the pillow? What does that do to one's system?
Brock, I'm part water engineer and part vegetarian, and I wouldn't lie to you. I assure you that putting the produce under a stream of running water can dislodge solids that aren't part of the produce from the surface. Sometimes even solids that are clinging to the lettuce by several tiny caterpillar feet.
Maybe Brock has satyr ancestry.
Nice work there, tying in Brock's Neighbors-At-The-Window incident.
I actually agree with Sifu (and have the hubris to disagree with Searle) : I think the Chinese room understands the Chinese language.
My remark at 60 was not an effort to explain why natural language processing is impossible, but merely to explain why it is difficult.
Many early AI investigators optimistically thought that we'd have HAL-like machines conversing with us by the turn of the millennium, and the popular imagination and SF writers were, I suppose, guilty of inflating these expectations. However, optimists were repeatedly confounded by the difficulty of reliably resolving the ambiguities in natural language that humans resolve through "common sense" -- the required background being so innate in humans that few seem to have understood the extent of the required knowledge of the world and of the enormous context that we bring to bear on determining "what makes sense".
The problem of giving computers common sense has turned out to be a difficult one.
Sifu's remark that computers have senses is, I think, a bit of a stretch. Computers have "sight" through cameras, but sight is just a part of vision, and no one yet has a good model of the complex image processing done in the first layers of the visual cortex. People aren't cameras -- much of sight lies in discrimination: in which details we notice and which we suppress.
No computer I know of has even a scrap of the kind of integrated touch/pressure/hot/cold/pain/pleasure sensory membrane that covers my entire body.
I'm not saying these problems are insuperable; I don't think that they are. I am saying that early optimists were surprised at the number of such problems and at their depth, and that to my knowledge the problem of making a computer understand natural language and respond appropriately using natural language is still considered difficult after decades of research.
I'd love to talk with HAL or Shalmaneser before I die, but I don't expect to have the opportunity.
I'm part water engineer and part vegetarian
So you call your mom "Mwater" and your dad "Potater"?
Dec. 15, 2005: OR-OSHA issues five additional citations against Threemile Canyon Farms. Inspection documentation obtained by the agency establishes Threemile has a policy of denying OR-OSHA compliance officers on-site unless they have a warrant. The Farm later confirmed this policy in the February 3, 2006 article titled "State Cites Threemile Canyon Farm over health, safety issues" in the Tri-City (WA) Herald newspaper. The violations include: 1) Not providing enough bathrooms for employees, 2) Failing to provide workers with information about pesticides used in the break room, 3) Allowing workers to eat their lunches in the break room where an insecticide not approved for use around humans had been applied, 4) Allowing trash receptacles to overflow, 5) Not maintaining screens in the break room windows (OR-OSHA inspection number 308460757(93))
You know what happens when you don't provide bathrooms, or when you forbid your workers to take bathroom breaks? They go to the bathroom in the fields, with no place to wash up after themselves.
It's appalling for the workers, and it's not very nice for the people eating the produce later.
Don't bring those things near my family.
209: Then why does washing your hands with just water not do much good?
204: well, it depends how you define emotion. Robots have been programmed to respond to aversive or attractive stimuli in basic conditioned-response kind of ways for decades, so if you think lobsters have emotions, then there you go. If you define emotion as human-like reactions with physiological elements and tears and whatever, then no, nobody's simulated that particularly, but there has been a lot of work in the affective computing realm at making robots that learn behaviors that are recognizable to human interlocutors as emotional response (see e.g. kismet). If you define emotion as a complex learned reaction to stimuli that causes changes in decision-making strategy or changed sensory profile or whatever, there's been lots of work, but you wouldn't particularly recognize it as "computer being programmed to have emotions", as it's mostly on the level of modeling behavior. But if you wanted to make, say, a chess program that got really angry when it lost and subsequently played shitty, I could point you to some relevant papers.
If you define emotion as, you know, the gladness suffusing the heart of the devout man as he gazes upon a lily in the morning dew, well, no, the devout-man-gazing-upon-a-lily box thought-experiment remains intact.
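(The "changes decision-making strategy" version really is easy to sketch. A made-up toy, not anybody's actual research: a frustration variable that rises on losses and injects noise into move selection:)

# Toy "angry chess program": after a loss, frustration goes up and the
# program's move choice gets noisier, so it plays worse when it's mad.
import random

class MoodyPlayer:
    def __init__(self):
        self.frustration = 0.0              # 0 = calm, 1 = furious

    def record_result(self, won):
        # Conditioned response: losing raises frustration, winning calms it.
        delta = -0.2 if won else 0.3
        self.frustration = min(1.0, max(0.0, self.frustration + delta))

    def choose(self, scored_moves):
        # scored_moves: list of (move, evaluation); anger blurs the evals.
        def noisy(score):
            return score + random.gauss(0.0, self.frustration)
        return max(scored_moves, key=lambda ms: noisy(ms[1]))[0]

p = MoodyPlayer()
p.record_result(won=False)                  # it just lost a game...
print(p.choose([("solid move", 0.9), ("blunder", 0.1)]))  # ...so it may blunder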
209: DON'T LISTEN TO HER, BROCK! YOU CAN'T TRUST PEOPLE LIKE THAT! YOUR METHODS WORK GREAT. JUST STAY YOUR COURSE.
211: I think we actually aren't disagreeing; you're saying that the kind of holistic simulation projects that are traditionally defined as "AI" are extremely difficult and far off, and I totally agree. I'm just saying that there's no individual piece that we aren't learning more about how to model all the time, and all of those pieces are basically (as far as we know) susceptible to being fruitfully understood through computational models.
I usually don't put it that way because it seems nuanced and dismissive, when the truth is that the amount of progress that's been made in just the past 10 years in modeling tasks that were previously thought to be intractably "human" and judgment-based and whatever is pretty astonishing. The grand failure of the Minsky-Chomsky nexus of big-thinking wrong people is no longer particularly relevant to what is or isn't possible; they just had the wrong approach.
For the record, I believe the lettuce in which the caterpillar resided had in fact been washed. Not very enthusiastically, maybe, but still.
You know, them salad spinners'll set a caterpillar whipping right out of the lettuce, Brock. It's like a carnival ride for them.
Washing your hands with water doesn't get off loose pieces of dirt and whole caterpillars?
If you're talking about dirt that stays after putting your hands under the water, I'd guess that ionized clay particles are adhering more closely to the oils on your skin than to the slightly ionized water. But lettuce doesn't have a charged surface, to my knowledge, so I bet washing vegetables is more effective than washing your hands. I also think that for your purposes, getting any larger clumps of dirt or uppity caterpillars off the lettuce would suffice.
221: STOP HARASSING THE POOR MAN! HE KNOWS WHAT HE'S DOING!
AND WHO ARE YOU CALLING UPPITY, MISSY?
Calling you uppity, tubeworm.
Please do not send a plague.
I'd love to talk with HAL or Shalmaneser before I die, but I don't expect to have the opportunity.
Joel Hanes was a yonderboy.
I have a beer that had been forgotten in my car's trunk in hot weather for weeks; I rediscovered and put it in my fridge yesterday, and I expect it will be fine when I drink it tonight. As a gesture of solidarity with Brock.
I only like artisanal hand-selected butterfly poop produced by butterflies that are not coerced or caged.
You can't coerce butterflies. Butterflies are free! and flighty.
218 : yes, I think we're in vehement agreement.
What do you think of the book recommended at 196 ? I'm years out of date, and could use a concise refresher.
As for 201 and "friendly, non-judgmental, relentlessly positive" -- apparently you boot from different ROMs when working at the Institute.
Veering wildly into the washing vegetables thread : the kinds of bacteria that live on your skin are more likely to cause sickness than the kinds of bacteria that live on lettuce -- unless the lettuce is contaminated a la 213.
The cell membrane of most bacteria is made of a "lipid bilayer" -- it's a grease bubble. Also, healthy human skin is covered with sebum, a light grease that tends to protect bacteria unless you use soap (soap has the wonderful property of dissolving grease, including lipid bilayers -- it kills bacteria by breaking down the cell membrane).
I find insufficiently-washed lettuce to be inedibly gritty. I buy it, separate the leaves, wash each one thoroughly on both sides, shake it quite dry, and then wrap it in paper towels for later use. Maybe this is not encouraging to the lazy produce-eater, but lettuce wants washing.
I would probably not drink that beer without a taste test first, but that's because I believe in so-called skunking, which may or may not have been exposed as an urban legend by now.
Brock, it really is useful to wash your veggies, especially greens. Dunk them in a bowl of water a few times, then into the salad-spinner, yay! They'll last longer that way, too.
What do you think of the book recommended at 196 ? I'm years out of date, and could use a concise refresher.
I don't know it. I can recommend textbook-y books, but I haven't read any lay overviews in a long time.
As for 201 and "friendly, non-judgmental, relentlessly positive" -- apparently you boot from different ROMs when working at the Institute.
It's just so difficult to model these things. Computers will never be friendly, non-judgmental and relentlessly positive like humans are.
I would probably not drink that beer without a taste test first
Maybe I'm giving PV more credit than due, parsi, but I suspect he or she is going to smell it and/or sip it rather than going for a pop-the-top-and-start-chugging approach. But, hey, I could be wrong.
222: But lettuce doesn't have a charged surface, to my knowledge,...
And you call yourself a science type person. Get some romaine and a voltmeter, then comment.
oops. 228 was me
Jesus. I messed up my back, and apparently my comment boxes.
What is a drink of beer, if not a taste test? Or did you mean you wouldn't drink it without a third-party taste test?
I believe in so-called skunking, which may or may not have been exposed as an urban legend by now
I have no idea what this sentence could mean. I've always understood "skunking" just to mean beer that went bad (or perhaps never was right), whatever the cause. How could that be an urban legend? I've had bad beers plenty of times. Does the term "skunking" refer to some more specific phenomomnom that I'm unfamiliar with?
I thought skunked beer had frozen and thawed too many times, or something.
I can recommend textbook-y books
Would you then, please ?
241 et seq.: it's caused by light.
241: I think one time is too many times, especially if we're talking bottled beer.
Text Booky Boo-ook! (Bow-wow.)
From the link in 242:
And it's been said that bottled beer can become light-struck in less than one minute in bright sun, after a few hours in diffuse daylight,
This is nonsense. I've had beer that's been sitting in ice-water, out in the sun, for plenty long and it tasted like the champagne of beers.
I've encountered a new low in problematic hotel wifi. Every time I try to go to a webpage I get redirected to a Qwest Consumer Protection Program page that insists my computer has a possible virus on it and asks me to remove said virus.
I suspect he or she is going to smell it and/or sip it
I know. I spoke quickly. I meant that I'd view that beer with a wary eye, that's all.
I was trained to think that skunked beer was that which had been fully chilled and then brought back to room temperature, and even subjected to heat (in a closed car, say), and then chilled again later.
I really think this is probably bullshit. Ish. Though I've had beers that don't seem to stand up well to that kind of treatment.
242: by the complex process of looking behind me, I come up with Bishop's book, which is great and totally readable and gets you pretty much through kernel methods, Russell and Norvig's book, which is older but also pretty good (and Norvig is chief scientist at Google), and Sutton and Barto's book on reinforcement learning.
248 before seeing 243. Interesting!
249: thanks. Just what I was looking for.
237: Why don't you remove the virus from your computer, then?
Who else has tried to start drinking and found that the case of Old Milwaukee Light (cans) is frozen? Because that's what happens when you put a case of beer in the trunk of a car and the day's high temperature was still 30 degrees below freezing. Of course, you can't bring the beer inside to thaw for the same reason that you couldn't remove it from the trunk. You're 16 years old. The next thing you know, you're driving in a car with six underage peers, the defrost going on high, eight cans of beer on the dash, the windows open so you don't cook. Of course, no worries as you've already driven by the house of the only cop and he's home. Your sanest friend is in the back, muttering that every oncoming set of headlights could be a state trooper. And, just after the illegal U-turn, he's right.
MY MASTER HAS TOLD ME TO CRY.
246: Miller and Corona have negligible hopping, so they're not much degraded by UV.
237: Why don't you remove the virus from your computer, then?
Soap to dissolve its lipid bilayer doesn't work, because capsids are made of protein.
Luckily either someone at the hotel has shut down the program that was using the hotel wi-fi to send spam, or some hotel employee dealt with Qwest somehow.
253: Are you writing the midwestern suburban version of Less Than Zero?
257: Suburban? We were three hours from the nearest suburb.
Moby brings authenticity when Unfogged talks about hay.
I've spent an hour or so (not recently) trying to find that thread. As far as I can tell, it's gone.
I don't intend to be defending the claims of strong AI, necessarily, by the way.
well, that's nice, and I don't intend to claim that computers can't do lots of cool stuff, and won't do cooler stuff in the future. In that sense we don't disagree.
The goal of exactly simulating human consciousness is either impossibly far off, impossible, or merely deeply silly. But to leap from that to "well, natural language processing is pretty much impossible" or "you can't model emotions in a computer!" bespeaks a vast ignorance of what's actually happening in machine learning.
I never said natural language processing is impossible or that you can't model emotions. My only point is that you shouldn't *assume* you can do this just because you have a lot of computational power. Maybe computation is just so different from what's going on when people do their language thing or react to their emotions that there will be insurmountable complexities going from one to the other, so your computational parallel to people talking and feeling will forever be liable to bizarre slip-ups or have to be structured by programmers for narrow tasks. Emotions are central to human cognition after all. We really don't know, and I suspect we are several basic breakthroughs away from knowing. As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?
If you define emotion as a complex learned reaction to stimuli that causes changes in decision-making strategy or changed sensory profile or whatever....If you define emotion as, you know, the gladness suffusing the heart of the devout man as he gazes upon a lily in the morning dew, well, no, the devout-man-gazing-upon-a-lily box thought-experiment remains intact.
Is the point here that "complex learned reaction to stimuli" is man talk suitable for the rat experiments in the hard sciences, but "devout man gazing upon a lily" is pussy shit for liberal arts majors so it's probably not important anyway? Emotions are emotions. That whole comment 216 was, ummm, highly reductionist.
Because you wanted to re-read the best conversation Unfogged ever held? I understand.
I've spent an hour or so (not recently) trying to find that thread. As far as I can tell, it's gone.
Yahoo frequently works better than google for finding things in the archives.
It's the only thing for which I use Yahoo search.
As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?
Whoa there. Predicting the exact weather is difficult because of chaos: you have to know initial conditions very very very precisely to get the right answer. This doesn't mean the complexity isn't simulable in principle; you could produce something that simulates the complexity perfectly well, and over an ensemble of runs accurately predicts the statistical properties of real weather. You just can't give it accurate enough inputs. So there's no lesson you can draw from this about the brain: it's true that even with the most advanced imaginable brain simulator, you wouldn't be able to predict what I'll be thinking in six hours. This doesn't mean you can't imagine a brain simulator that accurately models everything a brain does.
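To make that concrete, here's a toy sketch in Python (the logistic map at r = 4, a stock chaotic example; the map and the numbers are just my choice of illustration, nothing to do with actual weather models). Two trajectories starting 10^-10 apart disagree completely within a few dozen steps, yet both wander the same interval with the same long-run statistics -- which is the simulate-the-statistics vs. predict-the-instance distinction.

# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r = 4 (a standard chaotic example).
r = 4.0

def step(x):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two nearly identical initial conditions
for n in range(61):
    if n % 10 == 0:
        print(f"step {n:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
    x, y = step(x), step(y)

# The gap grows roughly exponentially (about doubling per step here)
# until it saturates at order 1: useless for point prediction, but both
# runs share the same long-run statistics over [0, 1].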
How do you know the brain is simpler than the weather?
Well, the brain is smaller than the weather.
252 fails the Turing test
247: I just experienced something similar on my Fargo trip last month. Trying to use a tiny, portable computer to access a network which will allow you to communicate with a friend who is only able to communicate via computer messages because she is being paid a large amount of money to stay in a prison-like medical research facility for three weeks, and then being unable to because the network provided by your hotel is unaccountably paranoid and stupid: A problem the science fiction writers of the 1950s would never have imagined. Fargo is a harsh mistress.
Bing seemed to be working better than Google for Unfogged searching for a while, but lately I haven't had much luck with it. Maybe they're aping Google too well now.
Speaking of Bing, today a random elderly stranger out of the blue told me I looked like I could be "Bill Gates' little boy". In response to my sort of quizzical gaping she said "it's a compliment!" and I mumbled a confused thanks and walked away as quickly as I could.
254: If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definition I have for "chaotic": arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior. By this understanding, we may someday have much better ten-day forecasts, but we will never have an accurate thirty-day weather forecast.
Interestingly, computational chaos was first discovered by a weather modeler.
Thank you NickS. Yahoo could not have been easier. I think I found it, but it seems shorter than I recall. See here.
254: If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definition I have for "chaotic": arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior. By this understanding, we may someday have much better ten-day forecasts, but we will never have an accurate thirty-day weather forecast.
This is a practical limitation, not an in-principle one. In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather. However, small errors in the initial conditions grow exponentially and overwhelm the long-term accuracy of your simulation, so small improvements in results require very large improvements in input.
All that aside, you seem to be missing my point, so maybe I didn't make it well enough. The question about modeling the brain isn't like the question about predicting the actual real-world weather at some point the future. It's more like the question of modeling something that has all the same properties as weather, without having to get the inputs of the current state of the real world exactly right. And that's a tractable problem.
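If it helps, the standard back-of-the-envelope version of that point (textbook chaos reasoning, not anything specific to weather codes): an initial error $\epsilon_0$ grows like

$\epsilon(t) \approx \epsilon_0\, e^{\lambda t} \quad\Longrightarrow\quad t_{\mathrm{pred}} \approx \frac{1}{\lambda} \ln\frac{\Delta}{\epsilon_0},$

where $\lambda$ is the largest Lyapunov exponent and $\Delta$ is the biggest error you can tolerate. The prediction horizon grows only logarithmically in precision: measure the initial conditions ten times better and you buy a fixed $\ln(10)/\lambda$ of extra forecast, every time. Hence "small improvements in results require very large improvements in input."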
well, that's nice, and I don't intend to claim that computers can't do lots of cool stuff, and won't do cooler stuff in the future. In that sense we don't disagree.
In the sense where you either didn't read or didn't understand what I said, yes, I suppose so.
Your next paragraph is really quite uninformed, but I will try to unpack it.
[... partial, but already endless unpacking omitted ...]
No, you know what, I'm actually not. You don't know what you're talking about, and you don't seem interested in what you're talking about, so engaging you is going to be one of those horrible arguments where you keep saying the same thing and thinking it's a point, when in fact you're just asserting the same things over and over. You can't just say "well, maybe emotions and language are something that can't be modeled computationally" if you have no idea of what scientists currently believe about what emotions and language actually are. I mean, you can, but then your argument boils down to "but what if science is wrong?"
Well, yeah, good point. What if everything we know is wrong? Fuckin' magnets, how do they work, right?
If my understanding is correct, no finite degree of precision in specifying initial conditions is sufficient to allow accurate simulation of a true chaotic system. That's actually the definition I have for "chaotic": arbitrarily small differences in initial conditions can produce arbitrarily large differences in system behavior.
This is my definition, too. Somewhere on the web I got into a big argument because I maintain that the Jurassic Park explanation - "Chaos means that a butterfly flaps its wings in China and we have a thunderstorm here!" - is wildly misleading. There's nothing in the definition about having far-reaching impact. You could have a chaotic function that only took on values between 2 and 4. And now I've resuscitated the argument and I'm having it with myself.
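For what it's worth, "chaotic but confined between 2 and 4" is trivially realizable: conjugate the logistic map by an affine change of coordinates. A sketch (again just my toy example) -- sensitive dependence, bounded range, no continent-sized consequences required:

# A chaotic map whose orbit never leaves [2, 4]: the logistic map on
# [0, 1], shifted and stretched by y = 2 + 2x. Chaos is preserved under
# a change of coordinates; the range stays bounded.
def bounded_step(y):
    x = (y - 2.0) / 2.0              # bring [2, 4] back to [0, 1]
    return 2.0 + 2.0 * (4.0 * x * (1.0 - x))

y = 2.6
lo, hi = y, y
for _ in range(100000):
    y = bounded_step(y)
    lo, hi = min(lo, y), max(hi, y)
print(lo, hi)                        # never leaves [2, 4]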
By some definition of "computer", sure it is. How that is meaningful or useful is of course a much bigger question.
How about "not more powerful than a Turing machine" (or, more obviously, less)? I think that's a significant statement, given certain popular opinions.
Maybe PGD is secretly Roger Penrose. The brain is doing NP-complete things! The brain is quantum! Gravity collapses the microtubule wavefunction buzzword buzzword doddering old age!
Is the point here that "complex learned reaction to stimuli" is man talk suitable for the rat experiments in the hard sciences, but "devout man gazing upon a lily" is pussy shit for liberal arts majors so it's probably not important anyway?
No.
Emotions are emotions.
Fine. Could you define them for me, please? Cast your definition in terms of subjective experience, intersubjective reality, physiological and neurological implications, behavioral effects, and the epistemology of all of these. Then I can respond to you meaningfully about whether or not computers can model them. The point of the dewy lily example is that it carries a vast framing, which implies that you have to solve the problem of simulating an entire human consciousness, embedded in the world, in order to model emotions, and by any meaningful physiological/neurological definition of what "emotion" means, that's not true.
That whole comment 216 was, ummm, highly reductionist.
I love that you say this right after saying "emotions are emotions".
Fuckin' magnets, how do they work, right?
They hold construction paper and pizza coupons to the fridge.
274: not that Penrose isn't also annoying, but I think that just may be giving PGD too much credit.
In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather.
This does not accord with what I had thought I understood.
Rather than argue (I'm no expert, and often discover that I am misinformed) I shall quietly consult The Authorities for my own satisfaction. If I discover that I'm mistaken, I'll 'fess up.
264 and 270 are exactly right.
And, so long as we're mentioning AI textbooks, there's a free draft copy of Sutton and Barto's great book online.
274: we agree about Penrose, at least. What a shame.
272: The definition usually requires something like the existence of points with dense orbits, which is kind of like "far-reaching", though at the moment I'm not coming up with a good reason for this to be a necessary part of the definition. Maybe just because otherwise a simple exponentially growing function would be called "chaotic" when it really isn't.
"Topological transitivity" is the technical term.
278: See lecture 4 here (plus readings).
The relentlessly positive non-judgmental simulation problem has proved surprisingly difficult, admittedly.
I suppose it's easier to accept the word of some guy on the internet when he uses his real name.
285: Cosma can assign homework. That's always helpful.
279: I believe I sit corrected. Thanks.
281: But something like the double pendulum is chaotic, yet certainly has bounded range. Dense orbits don't mean it's not a compact space or anything.
I love that you say this right after saying "emotions are emotions".
Why? That's a highly antireductionist thing to say.
281: Maybe just because otherwise a simple exponentially growing function would be called "chaotic" when it really isn't.
Yes, exactly: you want to get at more than just exponential separation of initial conditions, towards the idea that an arbitrarily small perturbation can induce (eventually) any qualitatively permitted pattern of behavior. This is why the usual definition invokes a dense orbit plus infinitely many periodic orbits.
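For anyone keeping score, the usual textbook (Devaney) definition, as I remember it: a map $f : X \to X$ is chaotic if

(1) sensitive dependence: there is a $\delta > 0$ such that for every $x$ and every neighborhood $U$ of $x$, some $y \in U$ and $n \ge 0$ give $d(f^n(x), f^n(y)) > \delta$;
(2) topological transitivity: for all nonempty open sets $U, V \subseteq X$, there is an $n \ge 0$ with $f^n(U) \cap V \ne \emptyset$;
(3) dense periodic points: the periodic points of $f$ are dense in $X$.

And if I remember right, Banks et al. showed that for continuous maps on infinite metric spaces, (2) and (3) already imply (1), so transitivity is doing the load-bearing work.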
"Topological transitivity" is the technical term.
I thought this meant a map between topological spaces which commuted with a group action.
261
... Emotions are central to human cognition after all. We really don't know, and I suspect we are several basic breakthroughs away from knowing. As I understand it, it's not very controversial to say that a computer will never be able to predict the exact weather at a specific location six months in advance, because the system is just too complex and cannot even in principle be cracked by computing power. How do you know the brain is simpler than the weather?
You can't predict the weather 6 months in advance because it is basically random. So if there is a similar random component in human behavior you won't be able to predict it (other than in a statistical sense) but you may be able to simulate it perfectly well. Just as a computer can simulate tossing a coin in a way that is effectively indistinguishable from an actual sequence of coin tosses but cannot predict the exact results of the actual sequence of coin tosses.
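The coin-toss version in a few lines of Python, in case the simulate/predict distinction isn't landing (the stdlib PRNG standing in for "a computer"; which generator you use is beside the point):

import random

# "Simulate" a fair coin: statistically indistinguishable from real tosses.
sim = [random.choice("HT") for _ in range(100000)]
print(sim.count("H") / len(sim))  # ~0.5, with realistic run statistics too

# "Predict" a particular real sequence of tosses: nothing above helps.
# The simulation reproduces the distribution, not the instance.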
288: Oh, sure. But it's "far-reaching" within the space it's defined on. After all, the Earth is compact, more or less.
289: because I got confused, that's why.
275.last stricken from the record, and replaced with a complaint that PGD seems to be dismissing the bulk of modern brain science as reductionist, which, okay?
293: Sure. My beef is that the hand-wavy butterfly nonsense would lead a lay-person to believe that the key idea is that a tiny flap can have a wildly gigantic effect. Which, as you say, is true of an exponential function. So it's a poor heuristic.
Oh, christ, my stomach hurts from laughter and my eyes are wet, Brock Landers. If my eyes wore pants, they would be ruined, and I would send you the bill, Brock Landers. I love this thread.
I don't know much about AI but Erik Mueller was a family friend when I was growing up, and I gather he's done some important work in the field. We had a beautiful woodcut by his dad of Ben Shahn's face (link seems to be dead), five foot tall, hanging in the hallway. It used to scare the bejeezus out of me.
285: Like this is a plausible real name?
286: Not as helpful as you'd think, he muttered darkly.
291: It may be that also, but not in ergodic theory or dynamics.
We should have an Unfogged journal club discussion of Sharkovskii's theorem. Starting with figuring out the most reasonable way to spell S(h)arkovski(i)(y?).
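(Spell him how you like; the theorem itself is easy to state, if I have it right. Order the positive integers as

$3 \prec 5 \prec 7 \prec \cdots \prec 2\cdot 3 \prec 2\cdot 5 \prec \cdots \prec 2^2\cdot 3 \prec 2^2\cdot 5 \prec \cdots \prec 2^3 \prec 2^2 \prec 2 \prec 1.$

Then if a continuous map $f : \mathbb{R} \to \mathbb{R}$ has a periodic point of least period $m$, it has periodic points of least period $n$ for every $n$ with $m \prec n$. Corollary: period three implies all periods, whence the Li-Yorke "period three implies chaos" headline.)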
Of course, we won't really have solved AI until our computers know to eat hamburgers left in fish tanks.
Oh, balls, I just commented in the year-old hay bale thread.
270
This is a practical limitation, not an in-principle one. In principle, the better you measure the initial conditions, the longer into the future you can forecast the weather. ...
I believe the real problem with predicting the weather is that noise (solar constant not really constant for example) is continually being introduced into the system. Otherwise you might be able to get around the initial conditions problem by taking into account the entire time history of the system.
301: Fair enough. Initial conditions + boundary conditions, if you like. I can't say that for weather I have a clear sense of which effects are the first to cause things to go wrong.
Otherwise you might be able to get around the initial conditions problem by taking into account the entire time history of the system.
Oh, well, I already did that, it doesn't help.
but not in ergodic theory or dynamics.
Oh, right, I knew that definition. At some point.
301: Well, obviously the Earth isn't a closed system. It can be treated as such as an approximation, but to be completely accurate (in principle) you have to start including larger and larger subsets of the Hubble volume in greater and greater detail. Whether you treat those as part of the system or boundary conditions is probably not a substantial distinction.
Oh, also. My 273 totally pwnd essear's 274.
You're all pwns in my deterministic game of life. Dance, my pretties.
306: Might you have been thinking of topological conjugacy?
Well, the brain is smaller than the weather.
but wider than the sky
This sounds like something out of Desk Set.
310: Perhaps. Let's never let my old advisor see this thread. It's embarrassingly close to what I actually did my dissertation on. And then promptly forgot all my definitions. Apparently.
307: If you're going to that level of precision, you (or your prediction machine) are gravitationally coupled to the weather, raising all kinds of interesting problems.
314: In some ways, that sounds like an ideal dissertation.
I took four years off and am now wading through the dissertation that built on mine (but not the dynamical systems part; a cohomology part), and it's hard and painful and ugh. I was definitely not cut out for the research track.
Huh, I guess I missed that hay-baling thread the first time. I probably would have had something to say.
There are standard conventions for transliteration. Even computers can do it!
The relentlessly positive non-judgmental simulation problem has proved surprisingly difficult, admittedly.
Prickly has been a rising success, however.
321: Getting the emotions right is hard work.
I remember wanting to pull up a quote from some Hamlin Garland story when I read the hay thread, but I was too lazy. Also, wasn't sure if the passage I found was as relevant to whatever comment that made me think of it as I thought it would be, or if it was the right passage at all. Anyway, there are some good agrarian-set stories in Main-Travelled Roads.
I just read the Brock section of the thread again and hurt myself laughing again.
I know this thread is about other things now, and was in fact created solely to bring the Brock section into existence, but in this section of the article,
What makes language so hard for computers, Ferrucci explained, is that it's full of "intended meaning." When people decode what someone else is saying, we can easily unpack the many nuanced allusions and connotations in every sentence. He gave me an example in the form of a "Jeopardy!" clue: "The name of this hat is elementary, my dear contestant." People readily detect the wordplay here -- the echo of "elementary, my dear Watson," the famous phrase associated with Sherlock Holmes -- and immediately recall that the Hollywood version of Holmes sports a deerstalker hat. But for a computer, there is no simple way to identify "elementary, my dear contestant" as wordplay.
"We" and "people" are doing a whole lot of work there. Someone who hasn't read Sherlock Holmes or been exposed to catchphrases therefrom is in a similar position to that of the computer.
274 - Respect your elders; that toilet paper didn't tile itself.
||
Dear Drupal: I hate you. No, that's not really true. I hate me, for missing something incredibly obvious that would have allowed me to stop working prior to 11:30 at night. But I'm feeling sorry for myself, so you'll have to take the blame.
|>
[[
I don't believe I ever realized the extent to which the internet made living alone possible for me. People! It is so isolating to only be attached to the rest of the world with (*gasp*) your phone. (But I did get to talk to my grandpa today. Yay.)
[>
I'm back...perhaps I shouldn't comment again if it's going to set Sifu off. Things seem to be getting rather emotional. Anyway, after this comment and the next I'll bow out.
last stricken from the record, and replaced with a complaint that PGD seems to be dismissing the bulk of modern brain science as reductionist, which, okay?
yes, that's right. The scientific method is reductionist, and that's OK, when it gets cashed out for something helpful. But it's not so helpful when people start making scientistic authority claims about stuff science actually does not understand.
The point of the dewy lily example is that it carries a vast framing, which implies that you have to solve the problem of simulating an entire human consciousness, embedded in the world, in order to model emotions, and by any meaningful physiological/neurological definition of what "emotion" means, that's not true.
Why not? It seems pretty obvious to me that any higher-order emotion is going to depend in a meaningful way on a large fraction of the individual's entire emotional state, and also on a substantial amount of their past experiences and emotional history. It's true that the need to simulate the whole consciousness to fully understand its parts could make it hard to do science, and in many ways one would need to abstract away from that to make some progress. (Although as I understand it a lot of neuroscientists do feel it's necessary to tackle the system level directly). But it's an empirical question how much you gain or lose by doing that. I'm not sure science has demonstrated that the process of abstraction it engages in with respect to the emotions and personality has paid off yet, at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years.
Of course, no one would want to stop neuroscience, even if you'd want it to be more modest in its claims today.
You can't just say "well, maybe emotions and language are something that can't be modeled computatioally" if you have no idea of what scientists currently believe about what emotions and language actually are. I mean, you can, but then your argument boils down to "but what if science is wrong?"
This is just a pure appeal to the authority of science, which is grounded in successes of the physical sciences which have so far not been replicated in the human sciences. Are people happier and wiser today because science has cracked emotion? The successes in language are most impressive in a technological/usefulness sense (artificial translators, etc.); it's not like computers are anywhere close to self-generating interesting language. I don't think that current scientific definitions of "what emotions and language actually are" would, if modelled computationally, get you very far toward the real thing; hence the appeal to science as the source of authority here is an appeal to physical-science successes that may or may not end up applying...even models of purely physical/medical biological processes are a long way from complete.
This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI. Against that backdrop I think my perspective has a lot to recommend it, but as you move away from that toward "hey, let's just see how far we can push computers and how we can operationalize some cool theories about cognition" it does, in fact, get closer to pointless luddism. But I think the problems with doing more ordinary AI stuff are related to the complexity of consciousness in general, which I thought was a point well made in comment 60.
That is very long and does not mention hay.
emotions and personality has paid off yet
How many suicides have Prozac and other crudely reductionistic SSRIs prevented? It's not a complete understanding, but it beats shock therapy. Gary Marcus writes very nicely about brain science, does not seem crude or overreaching to me.
it's true that even with the most advanced imaginable brain simulator, you wouldn't be able to predict what I'll be thinking in six hours. This doesn't mean you can't imagine a brain simulator that accurately models everything a brain does.
Yes, there are different senses of simulation I was confusing there. But: when we've built our brain simulator, it will be possible that we've built something that cannot feel subjective experience, so is not a brain itself, and also will be limited in its usefulness in predicting what any actual brain will feel, think, or do. I'm sure that will be a very useful device, with some programming interventions it would make a very useful robot, and it would be unimaginably advanced compared to science today. But you know, maybe not quite as awesome as you'd think.
Poor Parenthetical. Even her pause/play symbols are frowny.
at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years
Are we talking about peer-reviewed science articles here, or articles in the popular media?
That's some hay, but not very much.
Teofilo is much more efficient in his hay posts than PGD.
stuff science actually does not understand
What is this stuff, and how do you know that?
It seems pretty obvious to me that any higher-order emotion is going to depend in a meaningful way on a large fraction of the individual's entire emotional state, and also on a substantial amount of their past experiences and emotional history.
Define "higher-order emotion". Anger? Jealousy? Schadenfreude? Melancholia? Humility before a truly exceptional piece of architecture? Love? The joy of discovery?
I'm not sure science has demonstrated that the process of abstraction it engages in with respect to the emotions and personality has paid off yet, at least not judging by all the crudely reductionist articles about brain scans, neurotransmitters, and the genetic determinism of everything that I've been reading over the past 15-20 years.
So, just to be clear, you're not talking about actual scientific research, you're talking about popular science articles in newspapers and so on, based (most likely) on press releases from university PR departments, and skewed towards those institutions that are best able to work the science reporting system, yes?
This is just a pure appeal to the authority of science, which is grounded in successes of the physical sciences which have so far not been replicated in the human sciences.
Are you kidding? Do you have polio?
Are people happier and wiser today because science has cracked emotion?
I imagine a non-trivial proportion of those receiving psychiatric treatment (as well as their families) would answer in the affirmative, as long as you define "cracked" as "gained some partial understanding of".
it's not like computers are anywhere close to self-generating interesting language
Could you offer some evidence for this assertion?
I don't think that current scientific definitions of "what emotions and language actually are" would, if modelled computationally, get you very far toward the real thing
And what are those definitions?
This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI
Leaving aside that "deep AI" isn't a thing, I don't think, this is exactly what irritated me about your previous comment. The singularity is, indeed, a massively stupid premise, but your counterargument fought stupidity with uninformed assertion, which is hardly better.
My endlessly long response to PGD's endlessly long comment was partially pwned. OH WELL: hay!
The human sciences haven't even been able to keep you from getting pwned in your own house!
339: Judging from the irrigation rig and the trucks, that is less than 2,000 acres.
342: Well yeah, but there's only so much you can fit in a single photograph.
Are we talking about peer-reviewed science articles here, or articles in the popular media?
Well, scientific journal articles are usually much more carefully qualified and there's a dialogue that can be self-correcting over time. But there's a connection between scientific articles and how they end up getting interpreted in the popular media...the "depression is just like diabetes, your brain is short of serotonin!" era didn't come out of nowhere. The cultural authority of science has a lot to do with it. The reductionism inherent in the scientific method, which can actually lead a lot of working scientists to be pretty modest (in my experience, anyway), gets imported over into how people think about things in the popular culture.
328: It really is amazing how much the internet can become part of one's life.
Anyway, here's the whole thing. It's not all hay, but I think most of it is.
346: And it's like it's made the future totally unpredictable.
343: Let's start a campaign to convince people that this is a well-known concept in German called "Frühstückvergnügen".
Well, scientific journal articles are usually much more carefully qualified and there's a dialogue that can be self-correcting over time.
This is part of it, yes. But then there's also the fact that science reporting in popular media gets the actual content of the study wrong probably 80% of the time. Truly, if the only way you learn about current science is through articles in the popular media, you aren't learning about science.
I mean, just to go back to your previous comment, you lump "brain scans, neurotransmitters, and the genetic determinism of everything" in together. Now, neurotransmitters have been well-studied for decades, and while there is certainly lots of new information about them, they are very, very different from fMRI (which is what you meant by "brain scans"), which is a very recent technology that has a lot of promise but is also susceptible to interpretation problems because of (among other things) its somewhat gross temporal resolution, expensive data collection, and the relatively complex statistical analysis required to analyze it. It is also perhaps the most thoroughly misunderstood scientific advance in years, such that reading about it in the popular press will, to a first approximation, teach you absolutely nothing at all. Neither of these things has anything at all to do with the "genetic determinism of everything", except that the latter two concepts are very popular in the lay media.
Discounting the work of scientific researchers based on the portrayal of various unrelated "hip" technologies in the popular press is... well, let's just say it doesn't give your assertions a lot of rhetorical force.
347: Sure, for a desert, that is a lot of hay.
they are very, very different from fMRI (which is what you meant by "brain scans"), which is a very recent technology that has a lot of promise but is also susceptible to interpretation problems because of (among other things) its somewhat gross temporal resolution, expensive data collection, and the relatively complex statistical analysis required to analyze it.
Use a different sequence and put a different body part in the machine, and you have much of my life. Not that we are much worried with temporal resolution.
put a different body part in the machine
If I'd known it was going to be that kind of party...
If I'd known it was going to be that kind of party...
I'd have put my needle in the threshed byproducts.
347: Sure, for a desert, that is a lot of hay.
Also for a dessert. Unless you're a big hay eater, I guess.
350: Speaking of science reporting, I enjoyed this.
There's nothing like eating hay when you're faint
Textbooks just want to be free!
http://www-stat.stanford.edu/~tibs/ElemStatLearn/
http://www.inference.phy.cam.ac.uk/mackay/itila/
http://www.gaussianprocess.org/gpml/chapters/
The above are all machine learning texts. There are many others online; in my stash of PDFs I have books on calculus, linear algebra, analysis, computer vision, spectral algorithms, and more. (If anyone wants them and can't work the Google, email me.)
Good thing there is no-one in the office to hear my guffawing. Go Brock!
I actually agree with Sifu (and have the hubris to disagree with Searle): I think the Chinese room understands the Chinese language.
Searle is full of shit because his argument is predicated on the assumption that the processes underlying artificial intelligence have to be at least analogous to those involved in natural intelligence to count. Whereas in reality natural intelligence is recognised purely by the observed relationships between inputs and outputs, as is clear from the fact that the concept of intelligence was understood long before people even knew that the processing happened in the brain.
This was written up in proper English in a learned journal by somebody famous, but it's twenty years since I read it so I can't remember who.
(In other news, I'm sick of that stupid pseud, and reverting to my real name.)
Searle is full of shit
I wholeheartedly support this statement.
My reaction, in general, to philosophy of mind is: "Great, you have a hypothesis. Now go do an experiment." But philosophers don't want to do that. They'd rather wiffle about qualia and waffle about zombies than actually validate their theories against the evidence. Science eats philosophy as more phenomena become amenable to measurement, and I expect philosophy of mind will soon fall prey if it isn't already being consumed.
Sadly, humanities types like to combat the infiltration of science by using words like "brains" and "minds" instead of "people." One can, any goddamn day of the week, find some talk to go to by an English professor who wouldn't know neuroscience or linguistics if they bit him on the ass talking about how [author] uses "language" to affect your "brain," with the surprising result of confirming all the literary-critical arguments he's made in his life. No examination or measurement of actual human brains is necessary for this approach.
364. I feared that might be so, but I hoped it wasn't.
How's your foot?
No examination or measurement of actual human brains is necessary for this approach.
Probably a good thing, if they've all been pre-empted by the Faculty of Mad Science.
["It's funny. You don't normally meet many mad social scientists."]
I'm not prepared to fight the 'philosophy is done for, it can't resist teh virile thrusting progression of science' wars all over again.
But yeah, I tend to agree with 361. Then again, I find myself leaning unfashionably* Ryle-wards on this sort of stuff anyway.
* although I expect that wheel is turning/has turned.
There can be no innocents in this war, ttaM.
I don't think philosophy is done for. I can't see a science of ethics any time soon, for example. Certain areas of philosophy might be past their use-by-date, however (but don't tell Brock!)
362: I do too, actually. I just didn't want to go down that particular road.
This whole dustup started in an argument in another thread about the singularity, which is the ultimately silliest form of deep AI
Was I in that thread? If so, I don't remember PGD being involved. Anyway, I resent people beating up on the Singularity, (3) and (2) especially. I'm not really a big fan of (1) though. It's like this conversation here never gets above the level of
Hey, man, have you heard? There's this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It's geek religion, man.
(OK, maybe a little above that.)
Biscuit conditionals are like compile-time macros.
Sifu's remark that computers have senses is, I think, a bit of a stretch. Computers have "sight" through cameras, but sight is just a part of vision, and no one yet has a good model of the complex image processing done in the first layers of the visual cortex.
Actually, we have a pretty good model of that (for monkeys, anyway). There's a comprehensive description of the (fairly up to date) state of scientific knowledge in this book. What we don't have is a good model for what happens at later stages of the visual processing pathways, and even less of a model for how the mind then interprets and makes use of the results of the processing.
371: thanks. Always glad to have my misconceptions corrected.
365: Good enough to go to work today, but it may have been too taxing. It's a long commute, a lot of walking, and a big campus. We'll see! Hopefully I'll be able to go tomorrow!
#define ANALOGY_BAN 1
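/* ANALOGY_BAN is defined above, so the #ifndef block below is stripped
   by the preprocessor: the analogy never compiles in. */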
#ifndef ANALOGY_BAN
Biscuit conditionals are like compile-time macros.
#endif
370: It was within the past week, but if you mention the S word Sifu Tweety will get angry.
I thought the prickliness was a feature.
Or hungry, as it turns out. There are biscuits if you are.