I found the math interesting, but I couldn't get past my aggravation that no one person on the team seemed to be willing to understand the entire research project. If your assumptions are wildly idealized - and these seemed to be - it's completely ludicrous to think that your economic conclusions are more robust because the math is super high-powered.
You're assuming the point is to come up with robust economic conclusions. The point is to come up with fancy maths to make your already determined conclusions seem more plausible to the uninitiated.
First, assume all people are perfect spheres
Assume perfectly elastic spheres. That will satisfactorily confuse the economists.
it's completely ludicrous to think that your economic conclusions are more robust because the math is super high-powered.
This plan is so complicated that it can't possibly fail!
What was it about, heebie? Can you give more details without violating anonymity?
I think so. I can always redact if the conversation starts to get too identifying.
All I know about the sorts of initial conditions was her example: Suppose all households have the same income.
Typical econ models that she and others work with are the Cash In Advance model and the Overlapping Generations model, and I forget what else. You can pull in a bunch of dynamical systems math, using inverse limit spaces to study backward dynamics.
Yeah, we didn't even have any limbs down or anything. Just a big bunch of rain and thunder. My brother lives over on the east side of Raleigh, and the tornado there made a big C around his neighborhood like they'd installed a force field, passing ~1.5 miles to the west of his house.
Suppose all households have the same income.
Communist dreamers!
All I know about the sorts of initial conditions was her example: Suppose all households have the same income.
It's easy if you try.
I think it just means modelling something using inverse limit spaces: that the value at any point determines all values at all earlier times.
But they used "backwards dynamics" in the talk a lot. I didn't make it up. However I didn't trust myself, so I just googled it to make sure I hadn't just invented it.
Postulating that causation flows backward in time just seems like some kind of reductio of all that is silly about economics.
re: 14
Mass-less money travelling faster than light, innit.
It allowed them to compute expected utility on different models. So then you know exactly how accurate your model is! Because of INVERSE LIMIT SPACES!
So they used these inverse limit whoosits to retroactively predict the values of certain key variables using the final values for those variables?
Does that require backwards causation or is it simply the kind of retrodiction any historical science has to do?
I don't know about in this case. But typical long-run equilibria in simple models have saddle point stability. So many economists assume the initial conditions are such that the economy is heading for an equilibrium, not away from it.
I think of this as related to what's called the "Hahn problem" and to Samuelson's "turnpike theorem".
Inverse limit spaces are built out of systems with the dynamics f(x_{n+1}) = x_n. In other words, there's some sequence of values {x_n}, and if you know any one value, then you can use the map f to determine all previous values. So all the economists said was "We've got a model that obeys that isolated condition."
Then the mathematicians put continuous functions over the set of possible sequences, and integrated these sequences, and somehow used that to determine the utility of the model.
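To make the backward-dynamics condition concrete, here is a minimal sketch in Python; the logistic-style map and the starting value are invented for illustration, not taken from the talk:

```python
# Backward dynamics: if f(x_{n+1}) = x_n, then knowing any one value lets you
# recover every earlier value by applying f repeatedly. (Going forward in time
# generally isn't determined, since f need not be invertible; that's what the
# inverse limit construction handles.)

def f(x):
    return 3.7 * x * (1.0 - x)   # hypothetical one-dimensional map, purely illustrative

def backward_orbit(x_now, steps):
    """Given the current value x_n, recover x_{n-1}, x_{n-2}, ..."""
    history = [x_now]
    for _ in range(steps):
        history.append(f(history[-1]))   # x_{k-1} = f(x_k)
    return history   # most recent value first, earliest value last

print(backward_orbit(0.4, 5))
```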
But typical long-run equilibria in simple models have saddle point stability.
But I'm questioning the underlying assumptions in these simple models. Or, it's fine if you want to have wildly idealized assumptions, but then you can't really draw any conclusions from your model without an equally wild grain of salt.
18: Retrodiction is my new favorite word. Thank you.
I was going to make an "I retrodicked your mom" joke but I'm too grateful to you for introducing me to my favorite word to do that.
20 was supposed to be to 18, but I don't think I really answered the question.
integrated these sequences, s/b integrated these functions.
But typical long-run equilibria in simple models have saddle point stability. So many economists assume the initial conditions are such that the economy is heading for an equilibrium, not away from it.
That's astonishingly stupid.
Then the mathematicians put continuous functions over the set of possible sequences, and integrated these sequences, and somehow used that to determine the utility of the model.
That does seem like a reasonably useful technique.
You get backwards induction from expectations. Agents plan by considering all the possibilities at the end, and then work backwards. So in models, you get two kinds of "causality": you have state variables where you are affected by your prior history (so the ordinary forward causality), and planning variables where you decide what to do today based on what you think is going to happen tomorrow (so a kind of reverse causality, which only occurs in your mind when you plan).
Wait, so you're determining the utility of the model based on how well it describes the present based on its prediction space? Am I getting that right?
Suppose all households have the same income.
||
Coincidentally, I was looking for the household income numbers last week.
It was surprisingly difficult for me to find them on the census web site so, in case anybody else is curious, here is the data.
|>
Well, she was squirrelly on this point, because she wouldn't say where α and β came from. So I couldn't ever quite link up the math with reality. But yeah, that was basically my take-away.
26: Robert's comment is not quite right, though what economists actually do is probably equally stupid.
Robert is thinking of a kind of static equilibrium. If agents are sufficiently patient, or random shocks to the economy are sufficiently small, then in standard models there is a deterministic path that the economy trends towards. If you think of that deterministic path as a static equilibrium, then under certain assumptions, the economy would head towards that equilibrium, and then stay near it for all time. This is where results like the "turnpike theorem" come in.
But nothing in modern macroeconomics requires this. Instead, imagine everybody in the economy had a complete list of every possible future state of the world. Imagine also that in every possible future state, everybody knew what prices would be in that state. Then everybody could plan their future decisions on different states, taking prices into account. Then assume that prices in every state are always such that there is no wasted supply, and no unmet demand. Given that, you can calculate the actual supplies and demands in every state, given prices, and then you choose prices such that supply equals demand. Voilà, you have solved a modern macroeconomic model. This isn't a long-run equilibrium in the sense that Robert means it -- the economy is in equilibrium at all times.
If you assume everybody in the economy is identical, and has identical possessions, then everybody will make the same decision in every state, so you can treat the model as if there's only one person in it, which means everything reduces to a single person who's making optimal choices.
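A toy illustration of "choose prices such that supply equals demand" with a single representative agent and one good; the demand rule, income, and supply numbers here are all made up:

```python
# With one representative agent and one good, "solving the model" collapses to
# finding the price at which desired demand equals the fixed supply.

def excess_demand(price, income=100.0, supply=40.0):
    demand = income / (2.0 * price)   # hypothetical demand rule (half of income spent on this good)
    return demand - supply

def clearing_price(lo=0.01, hi=100.0, tol=1e-8):
    """Bisection search for the price at which excess demand is zero."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0:   # demand too high, so the price must rise
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(clearing_price(), 4))   # 1.25, since income/(2*p) = 40 gives p = 1.25
```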
32.2, .3: so what does that get you?
If you're Robert Lucas, it gets you the Nobel Prize.
32.2, .3: so what does that get you?
An inordinate amount of influence over policy and an intellectual Get Out of Jail Free card when your advice and schemes plunge the world economy into crisis.
33. If you've got two dollars, it'll get you a cup of coffee.
If I assume that Robert Lucas needs to get out more, what does that get me?
Actually, I'm not quite sure what you're looking for in an answer. It gets you a complete specification of the economy in terms of individual tastes and technology. (That's a slight exaggeration in that you get a system of equations that has more than one solution.) You can treat a dynamic economy like it's a static economy, in that if everyone today knows that the price of onions in 2050 is $5 a pound, that's just as good as if you could pre-order onions today for 2050. You can conclude that under the right set of expectations (where everyone correctly forecasts market-clearing prices) there is no positive role for government in the economy.
32.2, .3: so what does that get you?
Is it supposed to be a way of letting you test the accuracy of your model ahead of time? Like, the idea is that if it predicts the present accurately, then you should trust its predictions about the future?
It gets you a complete specification of the economy in terms of individual tastes and technology. (That's a slight exaggeration in that you get a system of equations that has more than one solution.) You can treat a dynamic economy like it's a static economy, in that if everyone today knows that the price of onions in 2050 is $5 a pound, that's just as good as if you could pre-order onions today for 2050. You can conclude that under the right set of expectations (where everyone correctly forecasts market-clearing prices) there is no positive role for government in the economy.
Oh dear. I sure don't understand why or how, because I am a statistics dolt.
37: but given that it's predicated on at least one evidently nonsensical assumption, I'm still puzzled.
40: I'm not sure there's a single assumption, at least as stated by Walt, that isn't obviously false.
||
Apparently we've been judging these things all wrong.
|>
37: but given that it's predicated on at least one evidently nonsensical assumption, I'm still puzzled.
"No, look, it doesn't matter if the assumptions are nonsensical as long as the theory makes correct predictions. Do you think Newton cared about accounting for every crater and hill on the surface of the moon when he wrote his gravitational equations? Of course not. That's how my model works as well. And if it doesn't make correct predictions it's because the government is distorting the system being modeled. And also I am just like Isaac fucking Newton, OK? So shut your pie hole."
42, I think we have been applying a simplified version of the Ledger Rule.
43: well, right, but I'm trying to be open-minded here. I can think of plenty of models in fields I'm familiar with where they only really made progress by ditching some of the assumptions required for (biological/natural) plausibility (neural nets come to mind) but then were able to gain more information that might be useful for creating more plausible models in the future; I'm wondering if this is that kind of thing? First you have to show that you can create a closed macroeconomic model under some set of conditions, and then you can try to make that closed system more representative of facts in the world?
Perhaps, but economics seems to be particularly prone to confusing the map with the territory, and those maps tend to be of the "assume there be dragons" variety.
So they used these inverse limit whoosits to retroactively predict the values of certain key variables using the final values for those variables?
No doubt I am just intellectually lost here, but this sounds a lot like how I got through high school chemistry labs: figure out what the result is conceptually supposed to be and back engineer the experimental data so as to get that result. Same thing? Or is this different?
Different; I didn't do that until my college chem course.
45: Yes, I generally agree with this. If I temporarily suspend awareness that we live in this actual world, of course they work with "wrong" simplified models that are not necessarily worse than other social sciences. But in the actual world we live in there is a lot of 43 & 46. I'm halfway through The Battle for Human Nature: Science, Morality and Modern Life by Barry Schwartz (recommended by someone at CT) which is somewhat relevant:
However, the book finally argues, we cannot expect the errors of these disciplines [evolutionary biology, neoclassical economics, and behavioral psychology - JPS] to be self-correcting, for if people and the social institutions they live within come to believe these disciplines, then our social lives will come to look more and more like a confirmation of the picture of human nature that they paint.
In the model, you're not working backwards from the data to the predictions (that's the realm of statistics). Instead, the working backwards is a behavioral assumption as to how people make decisions -- you plan from your death backwards. Let's say that you know that you're going to die in 2050. (This time, assuming you know the exact date you're going to die is _not_ an actual assumption economists make. Here I'm just taking a shortcut in the explanation.) You know that in 2050 if you only make $10,000 a year from Social Security you'll be pretty unhappy, so you try to ensure that you have $10,000 or more in savings. To have that much in savings constrains you in 2049. You can save more in 2049 to meet your target in 2050, but at the same time if you spend too little, you'll be unhappy, so you don't want to save too much. Maybe you want to have $9,000 in savings in 2049, which constrains you in 2048. You work backwards until you reach today, which determines how much you save today.
Of course, there is uncertainty about the future. Maybe you'll get cancer in 2043. Maybe in 2050 there's a chance that the price of onions will jump to $5,000 a pound, so you'll have to save extra just in case, since you have an insatiable desire for onions. So you have to work backwards, planning for each contingency. That gives you a plan for today.
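A stripped-down sketch of that backward planning, ignoring uncertainty; the horizon, income, savings grid, and log utility are all invented for illustration:

```python
# Plan from the last year backwards: the savings rule for "today" is computed last.

import math

YEARS = 5            # say, 2046..2050
INCOME = 20_000.0    # hypothetical annual income while still working
GRID = [i * 1_000.0 for i in range(101)]   # possible savings levels

def utility(consumption):
    return math.log(max(consumption, 1.0))

def solve_backwards():
    value_next = {s: utility(s) for s in GRID}   # in the final year you live off your savings
    policy = []
    for _ in range(YEARS - 1):
        value_now, choice = {}, {}
        for s in GRID:
            best = None
            for s_next in GRID:
                c = s + INCOME - s_next          # consume whatever you don't carry forward
                if c <= 0:
                    continue
                v = utility(c) + value_next[s_next]
                if best is None or v > best:
                    best, choice[s] = v, s_next
            value_now[s] = best
        value_next = value_now
        policy.append(choice)
    return policy[-1]   # today's savings rule, reached last because we worked backwards

print(solve_backwards()[0.0])   # how much to save this year if you start with nothing
```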
This is separate from how you would compare a model to the data. You would write down a model that depends on some parameters (for example, your onion preference parameter). You solve the model for different values of the parameters, and you pick the parameter that best fits the data. So the predictions the model makes about the data are different from the predictions that the people living inside the model make. Inside the model, they actually know the parameters, like you know about your insatiable desire for onions.
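And a sketch of that outer, statistical step; the "onion preference" parameter, the stand-in model, and the data below are all invented:

```python
# Solve the model for each candidate parameter value and keep the one whose
# predictions best match the observed data (least squares, for simplicity).

observed_spending = [11.0, 12.5, 13.1, 14.8]   # made-up data

def model_prediction(onion_preference, periods=4):
    # Stand-in for "solve the model given this parameter value".
    return [onion_preference * 10.0 * (1.05 ** t) for t in range(periods)]

def squared_error(prediction, data):
    return sum((p - d) ** 2 for p, d in zip(prediction, data))

candidates = [round(0.8 + 0.01 * i, 2) for i in range(61)]   # 0.80 .. 1.40
best = min(candidates, key=lambda a: squared_error(model_prediction(a), observed_spending))
print(best)   # the best-fitting onion preference on this grid
```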
49.last: I've seen that argument made before, and I feel like I saw a counter-argument (to some specific empirical claims along those lines) made somewhat persuasively by Kieran Healy. Which is not to say it won't happen, but the idea that financial models, for instance, are self-correcting via that process does not seem like an idea that has robustly proved itself just yet.
50: on what basis is that behavioral assumption made?
51: Hmm, he is saying that the errors are *not* self-correcting, but I think he is using the term differently than you, and it does imply that we begin acting more like the model would predict. I guess I should finish the book to see what he is really saying in detail.
53: I was being inexact, in that what's really self-correcting in this instance is the reality the model is modeling, but anyhow you get what I mean.
It certainly makes intuitive sense to me that wide acceptance of models of social behavior would cause those models to become more accurate in a sort of marginal way, but it also seems intuitively likely that the effect of those models on people's behavior could easily be overwhelmed by other factors (like a financial panic), which makes the usefulness of the premise presumably somewhat limited.
54.2: I think that may be where he's going in the book, we act more like the model would have it, but we really aren't built that way, so it kind of works until suddenly it doesn't in a big, bad way.
But actually reading the rest might be better than trying to predict based on the my current limited data set. I'm not an economist after all!
It certainly makes intuitive sense to me that wide acceptance of models of social behavior would cause those models to become more accurate in a sort of marginal way, but it also seems intuitively likely that the effect of those models on people's behavior could easily be overwhelmed by other factors (like a financial panic), which makes the usefulness of the premise presumably somewhat limited.
There's a (many, actually) direct parallel to this in financial modelling. Part of the reason Black-Scholes (the formula used for option pricing) was so successful for so long, despite the fact that it ultimately relies on assumptions that are patently false in practice, was that everyone was using Black-Scholes to price their options.
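For reference, the formula itself is easy to state; here is the standard Black-Scholes price of a European call (no dividends), with arbitrary numbers in the example call:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

print(round(black_scholes_call(spot=100, strike=100, rate=0.05, vol=0.2, maturity=1.0), 2))   # ~10.45
```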
50: It gives you a parsimonious explanation of how people behave. You just take as data what people like, and what technology allows you to produce, and then you can make definite predictions about everything that happens in the economy. Otherwise, you have to model all kinds of extra things, like how people form expectations about the future, what do people do when prices are wrong and markets don't clear, etc. Also, under some conditions, you can add in explicit learning to the model, and everything still works out.
The alternative is that if you model expectations more directly, then the outcomes of the model becomes completely dominated by expectations, so you can get whatever outcome you want simply by positing certain expectational reactions. (For example, "if you don't enact Paul Ryan's plan tomorrow, no one will want to buy Treasury bonds and the government will go bankrupt.") The assumptions tie your hands so that you can't do that.
I agree with 32.2. Mainstream economists call the complete time-paths they get from solving their models "equilibria". The working backwards tells the model solver what the initial (spot) prices must be for each equilibrium.
As I understand it, the total quantities and distribution of initial endowments are part of the data - the parameters, if you will - of the model. Suppose that the agents in the model take some time to make the calculations and to coordinate their expectations. And, while they are doing this, they buy and sell. Firms even produce. These actions change the quantities of the initial endowments and their distribution. The equilibrium paths calculated from the original data are then irrelevant.
I find it difficult to describe what I take to be the orthodox theory in any complete way and have it still make sense.
56: that's the specific example that Kieran Healy argued against, if I remember right.
I feel like I saw a counter-argument (to some specific empirical claims along those lines) made somewhat persuasively by Kieran Healy
In the review of An Engine, Not A Camera? The process described by Schwartz is, interestingly, very similar to that described (and endorsed as the height of rationality!) in the individual case by Velleman in several of his works, in a way that (to me) completely misses the point of the criticisms to which he takes himself to be responding.
There's been a lot of really interesting work done on the ways that diffusion models break down in certain circumstances that seems relevant to specific questions about the failure of Black-Scholes but not particularly relevant to the self-supporting feedback in social models question.
In general it seems like feedback effects are going to be really hard to predict, so predicting whether widespread belief in a model would really lead to greater efficacy for the model is probably complicated as well.
There's nothing like one of these econ-stat-math threads to make me feel like an innumerate moron. Back to the b-school-bashing thread.
Decision Field Theory, Kraab. It's super neat! And the way it works (bounded random walk model) is really similar to Black-Scholes.
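For the curious, here is a minimal bounded-random-walk decision sketch in that spirit; the drift, noise, and bound parameters are illustrative and not taken from Decision Field Theory:

```python
# Accumulate noisy evidence until it crosses an upper or lower bound, then decide.

import random

def decide(drift=0.1, noise=1.0, bound=10.0, max_steps=10_000, seed=1):
    """Bounded random walk: evidence drifts toward one option, plus noise."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if evidence >= bound:
            return "option A", step
        if evidence <= -bound:
            return "option B", step
    return "no decision", max_steps

print(decide())
```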
It sounds to me as if the assumptions, however idealised or unrealistic, were probably less so than the normal run of the literature. Overlapping generations models are really hairy - think of them as they are, which is basically a massive control engineering problem with respect to a heavily recursive system with substantial positive feedback. And you need to say something about the overall properties of that system, which is sufficiently specific to be susceptible to empirical testing. You're going to be making a *lot* of simplifying assumptions about that system.
The state of the art when I gave up economics involved models in which the different agents were so homogeneous that it was very debatable whether they could be considered distinct from one another. Even then the models were hairy, because the fact that there were separate overlapping generations introduced a tiny little bit of heterogeneity in the endowments of labour and capital. Even in perfect information contexts, they could typically only be solved as quite tricky dynamic programming models.
Obviously, as you start trying to use more general assumptions, you have to use more difficult mathematical techniques[1]. So it's more like "assume that homo economicus is a very very slightly irregular spheroid at this stage". It's actually very interesting stuff and a lot of people who went deeply into it seemed to come out with a really good sense for the economy (a bunch of guys at the Bank of England were heavily into these models at one time), albeit that the actual models were a bit of a washout and I personally was never convinced that the insights gained couldn't have been got by doing economics instead. It's one of those things like modern analytical metaphysics that probably ought to be allowed and even funded but whose practitioners need to be kept on a tight rein because they are very prone to giving themselves intellectual airs and graces.
[1] Or computational, "agent based" simulations that always look very interesting but never seem to go anywhere, basically because the modeller can never really explain *why* he got the result he did without building a map slightly larger than the territory.
64.[1]: it seems like those could be (more) usefully explanatory, and it's not like any of these are ever going to be usefully predictive, right?
64.[1]: it seems like those could be (more) usefully explanatory
It does, but then what happens is "Ooh, look! Pictures! The little agents are making houses! That looks like a house, right?"
It's one of those things like modern analytical metaphysics that probably ought to be allowed and even funded but whose practitioners need to be kept on a tight rein because they are very prone to giving themselves intellectual airs and graces.
Yeah, it's not like doing this work isn't really tough--but perhaps best confined to small seminar rooms and pleasant hotels.
Part of the reason Black-Scholes (the formula used for option pricing) was so successful for so long, despite the fact that it ultimately relies on assumptions that are patently false in practice, was that everyone was using Black-Scholes to price their options.
This is Donald MacKenzie's stuff. Related: The Scottish Verdict.
In principle they keep looking that way - Paul Krugman went down a two year blind alley iirc - but the trouble is that the models themselves become combinatorially difficult quite quickly and they have too many degrees of freedom - you can get more or less any stylised facts you are looking for out of them by stepwise tinkering with no pointers from the models themselves about what form of parameter-tinkering is valid or invalid. There have been a few successes - there was a Santa Fe workshop that proved to my satisfaction that bubbles and crashes are very general properties of almost any kind of securities market, however it is set up institutionally. But in general, one tends to end up feeling a bit silly when all one's really got to show for a year's work is the replacement of unrealistic assumptions about real people with unrealistic assumptions about imaginary people. It's just an intrinsically very difficult problem, which is why nobody really knows which methods are going to end up working best - it's entirely possible that simulation methods will end up delivering some of the goods but given that it will require very very big advances in the field from where it was when I stopped reading it, I at least am not ready to give up on the massive advantages of parametric modelling yet.
67: okay, that makes sense. I feel like that back-and-forth tension happens in any field with a lot of nonlinear dynamics mucking up the picture (I mean, I guess I'm thinking of fluid dynamics mostly, but I know it also happens in computational neuroscience).
Btw, MacKenzie's book is great, but it would be a shame if that was the lesson everyone took away from it. The reason Black & Scholes caught on is that it is actually a very good model, much more robust to perturbations of its assumptions than hack finance professors insinuate, and that delta-hedging a portfolio (or replicating a put option through portfolio insurance) is actually a very effective technique. Ed Thorp has made at least one of his several fortunes out of being aware that for the vast majority of the time, something like the plain Black/Scholes model (specifically, the version of it that he independently invented) is the way to go. The trick is knowing when you've reached one of the points when vanilla Black/Scholes won't work (and usually nothing else will either), but then did you really expect that making a killing in the options market was going to be easy?
70: I find it really interesting that it is so similar to decision models in neuroscience; I feel like there's something very smart to be said about option pricing using black-scholes modeling the aggregate behavior of idealized options traders in what is actually a somewhat biologically plausible way, but I also suspect I'm not going to be the person to say it.
I wouldn't go down that route if I were you. To be a bit technical for a second, the Big Idea of Black/Scholes IMO is that the stochastic process for the fair value of the option is a filtration of the stochastic process for the price of the stock. That's what lies at the bottom of it - that in all "ergodic" circumstances where the history of the stock price is in any way informative about the nature of the underlying stochastic process, there are quite tight bounds on what it is reasonable to believe about the fair value of the option. That's basically ensured by Ito's Lemma, which is really very general, so what your agent model is going to end up giving you is Black/Scholes, plus an agent based theory of the stock price process.
Btw, MacKenzie's book is great, but it would be a shame if that was the lesson everyone took away from it. The reason Black & Scholes caught on is that it is actually a very good model
What's frustrating about MacKenzie is that (a) he scrupulously gives you all the details about how much old-fashioned politicking and legwork was needed to get a market like this made legal to begin with (Hire Friedman to write something! Harass the regulators! Etc), while not making a big deal of that in the framing because it's not sexy; and (b) is incredibly judicious about what exactly he can and can't prove about the self-fulfilling side of BSM, but because the performativity stuff is so cool-sounding he shies away from figuring out how to deal with the possibility that those guys were onto something given that the model performed so well (in the old-fashioned sense).
But I could be wrong, and on the specific terminology, twenty years of late nights and scotch whisky pretty much ensures I am. My copy of John C Hull's textbook not only still has Deutschmarks in it, it predates the general realisation that the use of Black/Scholes in the LIBOR options market wasn't a market convention, it was the right thing to do!
Actually, free tip for any Professors of Sociology at Edinburgh who happen to be reading - there is a really nice paper to be done in basically producing a variorum edition of Hull's "Options, Futures and Other Derivative Securities" and using it to chart the development of things like the LIBOR market model.
but the trouble is that the models themselves become combinatorially difficult quite quickly and they have too many degrees of freedom - you can get more or less any stylised facts you are looking for out of them by stepwise tinkering with no pointers from the models themselves about what form of parameter-tinkering is valid or invalid.
Couldn't the empirical data constrain the parameters, at least a bit? It seems to my untutored eyes that you could get quite a lot of useful modelling work done out of a simulation which, rather than assuming lots of homines economici, started from a smorgasbord of homines more-or-less-realistici and just waited to see what happened. It wouldn't result in a neat formula, but it might more closely resemble reality, especially in tail-event scenarios.
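Something like the following toy, say, where a mix of simple behavioral types is let loose and you just watch the price path; every rule and number here is invented for illustration:

```python
# A toy agent-based market: fundamentalists and chartists trade, and excess
# demand moves the price. The point is only that a price path "emerges".

import random

rng = random.Random(0)
price = 100.0
history = [price]

agents = (["fundamentalist"] * 50) + (["chartist"] * 50)   # a small smorgasbord of types

for t in range(200):
    orders = 0.0
    for kind in agents:
        if kind == "fundamentalist":
            orders += 0.01 * (100.0 - price)   # buy below assumed fundamental value, sell above
        else:
            orders += 0.05 * (history[-1] - history[-2]) if len(history) > 1 else 0.0   # follow the trend
        orders += rng.gauss(0.0, 0.05)         # idiosyncratic noise
    price += 0.01 * orders                     # excess demand moves the price
    history.append(price)

print(round(min(history), 2), round(max(history), 2))   # range of the simulated price path
```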
The trick is knowing when you've reached one of the points when vanilla Black/Scholes won't work (and usually nothing else will either), but then did you really expect that making a killing in the options market was going to be easy?
According to classical economics, making a killing in any market is impossible, because the killings get arbitraged away.
72: oh, I gather that my pet theory doesn't describe any of the ideas that went into developing Black-Scholes, but if they turn out to be meaningfully homologous then it seems possible that efforts to address failures of the diffusion model to predict decision making at the individual level (i.e., what happens when the monkeys are hungry) could say interesting things about failures of Black-Scholes under conditions of panic.
There's still hope, Sifu. Black-Scholes breaks down where there are jumps in the stock price (the delta-hedging strategy breaks down), so maybe you can then predict how the option price will react to the stock price jump by using your pet theory.
79: well, that's sort of what I was wondering about; if what you're really predicting with Black-Scholes pricing is the internal decision models of a whole aggregated population of options traders, and their internal decision thresholds are based on steadily accumulating evidence, if something (stock price discontinuity, general market panic) pops them collectively out of that heuristic, how will they behave, and is that predicted in any way from the way the individual decision model breaks down?
80: Run!
(Just using that as an illustration of a decision an individual might make using internal decision thresholds in conjunction with steadily accumulating evidence.)
#80 is what I'm getting at though in saying that what you end up with is Black-Scholes, plus a computational model of the underlying.
Black-Scholes doesn't really "break down" with respect to discontinuous jumps (proof - lattice models exist and are recognisably discretised versions of Black-Scholes). You can even incorporate jumps into a Black-Scholes framework by changing the estimated volatility factor and reducing the frequency of your hedge rebalancing, as long as the jumps are reasonably statistically well-behaved.
The problems come, and hedging breaks down as Walt says, when the jumps *aren't* well-behaved. But that's not really a problem with the Black-Scholes model - it's just that it's actually difficult to hedge or price something if it's prone to really really unpredictable jumps! It's almost (but not quite, at anything beyond an utterly trite level) like saying that calculus "breaks down" at corners.
The "Black Scholes model" that very sad people have embroidered onto their golfing shirts doesn't work very well for much - the existence of the volatility smile shows that. But IMO the true "Black Scholes equation" is the differential equation, and something like this tends to show up in basically every "improvement" on Black-Scholes the literature has ever seen, and this is because it's that SDE that is the central insight of Black (and Merton, and to an extent, Scholes).
#80 is what I'm getting at though in saying that what you end up with is Black-Scholes, plus a computational model of the underlying.
Yeah, that's what I would sort of hope would happen.
But that's not really a problem with the Black-Scholes model - it's just that it's actually difficult to hedge or price something if it's prone to really really unpredictable jumps!
Oh, I understand. There's just been interesting work in terms of individual decision theory in looking at how diffusion models break down under conditions of increased uncertainty (in the one I'm thinking of, you can model the monkey's hunger and unhappiness by pulling in the decision bounds parametrically), and I wonder if those have correlates in stochastic option pricing models.
You probably want to look up that Santa Fe workshop from the 1990s I was talking about, but I've lost the bookmark three browsers ago I'm afraid.
It's fun to stay at the T-C-B-Y!
It's fun to stay at the T-C-B-Y!
They have everything that you need to enjoy,
Whipped cream and sprinkles with all the boys!
85: With enough hungry monkeys you will produce an accurate prediction of any market.
Not only did the monkeys produce nothing but five pages consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the monkeys continued by urinating and defecating on it. Phillips said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. ... They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."
Come on guys. Who doesn't love frozen yogurt?
NOBODY! THAT'S WHO!
88: well, right, that was my other idea: start a monkey hedge fund. Which kinda could work about as well as anything, if it weren't for the smell.
88: But I'm not sure they would reproduce the exact errors in that comment.
Walt Kelly had a pretty good riff on the random monkey/chimp stuff in his Jack Acid Society Black Book.