This should also be the thread where people fish out predictions from 1,2,3+ years ago and see whether or not it came true.
Man, Drum is so smart and so right about so many things but what is he on about with AI?
I want a person 3D printer no matter if Drum thinks it will be revolutionary or not. I also want something I could use it for. I'm thinking home-made robots.
Although people have been saying "we'll have real AI in ten years" since essentially day one of AI research so maybe he's just being historically informed.
what is he on about with AI?
This is exactly my reaction, but he's fully on the true AI train. The link in that post is his long article about it, I think.
He sure doesn't expect to make it to 2030.
Hasn't he heard that we'll be able to upload our minds to computers in twenty years?
My prediction at the end of 2008 still hasn't come true. Six years of being wrong!
I'm having a little debate with myself about whether I have too much of a kneejerk tendency toward contemptuous dismissal of things people say that are vague, but whatever, I'll post the comment anyway:
The "nanobots for medicine" thing also strikes me as pretty dumb. Like what if we built some kind of tiny things that had specific interactions with viruses or cancer cells or something? We could call them... I don't know... "drugs".
Or maybe that's what they already call the little robots that bring medications around the hospitals.
I predict Unfogged will still exist in some form in 2045.
The argument for the AI singularity is straightforward. If you assume (a) the brain is just a processing unit in some sense equivalent to a computer and (b) we'll continue to make exponential progress on computing power, then at some point we'll have sufficient power to simulate (and eventually outperform) a brain. We haven't gotten there yet, despite eternal optimism, because we've continually underestimated the complexity of the brain. But the brain is finite and exponents grow fast.
It's possible that there's some mojo in our neurons that invalidates (a) (this is, I think, Steven Pinker's position). It's also possible that (b) breaks down short of the finish line (we're having a damn hard time taking full advantage of multicore processors these days). But we're making fast progress and there's no reason to think computers won't continue to replace human labor at an accelerating pace.
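To put toy numbers on the exponential claim (all three constants below are assumptions made up for illustration; serious estimates of brain-equivalent compute span several orders of magnitude):

```python
# Toy illustration of the exponential-compute argument.
# Every constant here is an assumption, not a measurement.
import math

brain_flops = 1e16      # assumed compute for brain-equivalent simulation (estimates span ~1e13 to 1e18)
current_flops = 1e10    # assumed compute of a cheap machine today
doubling_years = 2.0    # assumed Moore's-law-style doubling time

# Years until the exponential crosses the fixed, finite target:
years = doubling_years * math.log2(brain_flops / current_flops)
print(f"crossover in ~{years:.0f} years")  # ~40 years under these assumptions

# The fragile parts: if the target is 1e18 rather than 1e16, add ~13 years;
# if doubling slows from 2 years to 3, multiply the whole timeline by 1.5.
```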
13 is making me sad to contemplate the inevitable death of Unfogged, replaced by some snarky, bored-lawyer-simulating AI.
Man, Drum is so smart and so right about so many things but what is he on about with AI?
That it will cause catastrophic unemployment? Is that not very plausible?
10.last: "Engineered antibodies."
Maybe I just lack vision, but I don't understand what a really small robot in the conventional sense would do to revolutionize medicine. Replace arthroscopic surgery? Deliver radioactive material to a tumor site? I mean, we can do automated surgeries; we have generally good precision at delivering targeted therapies. The biggest semi-unsolvable problem in medicine is just that people's organs/tissues wear out.
I want a person 3D printer no matter if Drum thinks it will be revolutionary or not.
You want a womb?
That level of resolution, but faster.
My #16 in the thread linked in 9 held up pretty well for 4 years.
I think the Pirates suck at a much higher (¿lower?) level than four years ago.
I predict there will be a controversy involving Uber in 2015.
In the future, AI will be powerful enough to find a good job - 'meaningful work at a living wage' - for everybody.
His timeline on personal privacy is utopian - does he think we will enjoy for decades more what we have already lost?
AI will never replace humans until it can enjoy LSD.
We're nearing the point where AI will take over the job of predicting that AI will take over the world.
"Climate change is going to start to seriously bite by 2030."
Uh, is this the year when we start redefining what "seriously" means? It seems to me this has already happened.
27: If you think now is serious, you're going to run out of superlatives over the next 20 years.
14 is the standard argument in favor of the inevitability of AI, but it seems obviously wrong to me. Computing power is almost entirely irrelevant -- it wouldn't surprise me if we already have sufficient computer power for AI. It's a question of software. We simply don't know how to write it, and we have no idea if we'll know how to write it 20 years from now.
Drum:6. Maybe old people will increasingly--and successfully--demand policies that steadily kill economic growth. This would be a bad thing why? See point about climate change etc.
Drum:10. See Charles Stross' Rule 34.
Unfogged:2. What indeed?
Unfogged:27. Exactly.
Drum's first point is pussyfooting. The economic assumption that people either work for the value at which they can sell their labour in the market or for a proportion of the marginal value they create is a dead concept walking, and will fall over quite soon due to continued automation whether "AI" improves or not. This leaves two practical alternatives: one is for everybody to work a 15-hour week for comfortable wages. This would be the better solution from the point of view of our capitalist masters, because it would enable one last big boom in new goods and services (mostly automated) for the Total Leisure Economy. However, I have no confidence in our capitalist masters being bright enough to grasp this. The other alternative is for us to pretend to work and them to pretend to pay us, a la USSR. Don't see much enthusiasm for this either.
And yet without one or the other, I don't see how we avoid a civilisation redefining crash, such as will make the fall of the Roman Empire look like small potatoes. There remains the theoretical possibility of revolution, but I don't see how that is to be brought about either.
I think 28 is exactly right. One of the big debates within the fields that have developed the examples he cites (machine translation, machines that play games, driving, etc.) is that we are programming machines to do by brute force things that humans do differently, and as a result we get what is often a good-enough approximation, but only for very specific uses, like the self-driving cars that can drive on highly vetted routes.
Sure, Google Translate works well - but not well enough to use for any serious purpose where you need to be sure the content is accurate, clearly communicated, and well written. Attempts to develop translation based on anything other than brute-force statistics have, to my knowledge, not gotten anywhere interesting.
Then again, "good enough for some purposes" can still make a big difference - just maybe not the world Drum is describing.
Putting a date on AI - however that's defined - is silly. But the implicit date of "never" is equally silly. It's reasonable to expect that there will either be steady improvement, or some kind of breakthrough at some point, and when that happens, it's reasonable to expect some serious societal repercussions as a result.
I mean, eventually a race of intelligent robots will rise from the ashes of our civilization to colonize the galaxy, right?
There is a nice counterargument to the "we will soon have enough computing power to simulate complete human intelligence" contention: there are organisms with far fewer neurons than humans, so there are organisms where we should already have easily enough computing power to completely simulate their behavior, right? People have taken the C. elegans worm, which only has 959 cells (302 neurons), which have been reasonably exhaustively categorized (including the full neural connectome), and attempted to do exactly that. After a couple of decades of work, parts of it might be starting to work just a little bit.
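Just to make "simulate it" concrete: the crudest version of the exercise is a leaky integrate-and-fire network, which any laptop runs trivially for 302 neurons. Everything below is a made-up sketch (random weights standing in for the real connectome, one time constant shared by all neurons), and the point of the worm experience is exactly that real neurons are nothing like these identical toy units:

```python
# Minimal leaky integrate-and-fire network: a sketch of what brute-force
# neural simulation looks like. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 302                          # C. elegans neuron count
W = rng.normal(0, 0.5, (N, N))   # random stand-in for the real connectome weights
v = np.zeros(N)                  # membrane potentials
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0  # one time constant for ALL neurons!

for step in range(1000):
    I_ext = rng.normal(0, 0.1, N)          # noisy external input
    spikes = v >= v_thresh                 # which units fired this step
    v[spikes] = v_reset                    # reset the ones that fired
    v += dt * (-v / tau + W @ spikes.astype(float) + I_ext)

print(f"simulated {N} 'neurons' for 1000 steps; trivial on any laptop")
```

The compute here is nothing; the problem is that this model is a cartoon of what each of those 302 cells actually does.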
||
Big Eyes is making me spin like a top.
Tim Burton has made a movie about kitsch and pandering that is itself kitsch and venal pandering.
This kitsch, the Keane paintings, creates an interesting contrast with some earlier kitsch, the fascist and Soviet Realist manly men and manly women posed against big-sky stuff, the Leni Riefenstahl opening of Olympia. Is the starving waif crying with a kitten our renewed role model? So, so innocent, so pure of sin, so victimized. Just like Margaret Keane, you know? I say renewed because Little Nell and Uncle Tom's Cabin were marks of the first petty bourgeois ascendancy within an earlier industrial revolution.
And there is Walter's marketing. We despise the marketing but love the product. This movie cuts to the quick in so many ways I can understand the ambivalent reviews.
|>
The bigger problem for the AI argument is that "the brain is just a processing unit in some sense equivalent to a computer" is not, in fact, a small assumption at all. It's either a pretty massive (and not well-supported) one, or one so trivial that it's hard to see how it carries any weight.
The trivial version is just that, like 'information' or 'information processing', there are people out there using the term to mean, basically, 'it's a thing where stuff stands in a relation of almost any kind to other stuff'. If you have to go through a progression of (1) we understand how the brain works and (2) we understand how human cognition works and (3) we understand the linkages between those and (4) we understand which parts of all those bits are relevant and which are not, just in order to get to asking about processing speeds in the first place, then 'computers are getting faster all the time' really doesn't carry too much predictive weight.
34 is a great example, but surely you think they'll get that one figured out in the next decade or so, and so that can't be evidence that AI isn't on the way.
Something a professor of mine pointed out a long time ago is that while it seems natural to say that the brain is a computer, we should be careful about that because historically speaking people seem to think the brain is basically a version of whatever the newest/shiniest/most significant kind of technology is at the time. The brain turned out to have very little in common with a system of plumbing, or clockwork, or the steam engine, after all.
Sometimes I wish I worked on C. elegans; they're doing such cool stuff.
37: I do think they will, and I certainly don't think that AI of any sort (up to and including "computer with a robot body that walks around and talks and has desires and feelings pretty much just like a person, as far as it's possible to tell") is impossible or unlikely over some very long time window. I just don't think it makes much sense to talk about as an imminent threat to an economic system that has much in common with ours. The previous probably applies to, I dunno, macro-scale teleportation and portable, efficient fusion reactors, too, I just know (a lot) less about those.
38 computers are totally different than those other examples because they're universal in a way that the others are not.
38: My brain isn't an advertising-driven vertical content aggregator?
I think trying to replicate the brain is missing the mark. AI doesn't have to think like a human. It just has to have a suite of capabilities that enable it to perform with human-like competency within a given field.
41 is just equivalent to saying that computers are different because they're computers in a way that the others are not, though. (And unless you have some reason to think that brains are universal computers it's a meaningless point to boot. We might just be well set up to do a bunch of neat things, but not any old thing, and since we're talking about things we understand or don't understand good luck with the examples...)
We can build machines to do neat things - and a lot of different neat things at that. The real problem for AI is that when it comes to human cognition/intelligence*/consciousness/etc. we're at a place where our understanding is pretty much equivalent to 'I'm pointing at them right now I think maybe'. Having faster computers gets us closer to making intelligent machines in the same way that having a 3D printer would get a bunch of medieval peasants closer to making a Nintendo.
*Go on, define and operationalize this word in an uncontroversial way - I dare you.
I think the main point of the hardware argument is that it explains *why* earlier predictions had to be wrong. The brain is just bigger than what we had available in 1970. That doesn't mean AI is necessarily coming soon, but it's a strong counterargument to "well you were saying that 50 years ago and we still don't have it."
43 is the point, isn't it? Nobody except science fiction novelists much cares about humanoid robots; however, the potential for automation to destroy 50% of non-agricultural jobs piecemeal over the next few years seems very real.
It's really, really difficult to even in principle come up with computations that something plausible could do faster than a computer. (Quantum mechanics is just about the only game in town, and even then it only gives speed-ups for very, very special problems.) I really don't even understand what you're talking about. Even if human consciousness can't be thought of as a calculation, any application of AI can. Are you saying you think brains are theoretically better at calculation than any Turing machine? That's a bold claim!
AI doesn't have to think like a human. It just has to ~~have a suite of capabilities that enable it to perform with human-like competency within a given field~~ be a tool.
This is basically the same thing, though, right? I mean, "in the next few decades we'll be developing new tools to do stuff that we didn't have tools to do before, or that we had tools to do part of but not all of, or that we had tools to do but they weren't as nice" is probably a pretty safe prediction. I'm not sure it's really what most people are imagining when they think about AI, though. It really just looks more like a definition for 'technology'.
... having a 3D printer would get a bunch of medieval peasants closer to making a Nintendo.
Sitcom!
Drum's main point is that it'll be new technology that takes all of our jobs. They'll be able to file motions and prove theorems and diagnose patients and program the new improved AIs. The question is what's left for us to do then. Whether you want to call something that can lawyer but not love its children an AI isn't the main point he's making.
With AI labor we might finally achieve the dream of the paperless office.
I think calling it a "tool" is a really bad analogy. Tools make you think of a person wielding the tool, and so they can't make people irrelevant, only more efficient. The whole point is that "AI" isn't like that, it just does your job without requiring a person. The question is whether there'll be any jobs left for 99% of us.
I do expect that within my life computers will get better at proving theorems than people are. But I don't at all expect that in Drum's timeframe. I hope to be retired by then. I don't think people will prefer to be taught by a computer than a person any time soon, so I think universities are decently safe until a full job market collapse.
I don't see how 52 points out a relevant difference. Humans would be involved in different ways, or fewer humans would be involved (but that's true of combine harvesters or mechanical looms), but that's not any sort of difference in kind. You're still talking about something being wielded, just not necessarily in the same way.
Also there are plenty of jobs that will still need humans to do them, even if we could make superior robots. I mean, robot football would also be neat but at the end of the day I'm pretty sure it wouldn't replace human-played football because the fact that human beings are doing it is part of the point.
it just does your job without requiring a person
Without taking a position either way, I'd note that the "without a person" judgment is also subject to historical contingency. An oil derrick works without a person, but of course there's all sorts of designing, building, powering that has to happen for it to work "independently." But all those things are true right now of computers, too, and whether they will someday not be true is the point at issue.
41 computers are totally different than those other examples because they're universal in a way that the others are not.
Are they? My intuition would be that it's possible to make a universal computer out of clockwork or plumbing. Not very efficiently, but in principle.
40 macro-scale teleportation
Pretty sure this will never happen.
Sports won't get you very far if people don't have money to pay to watch them... At any rate I'm willing to grant you a small number of performers (sports, acting, singers) who people prefer even though the robot versions are better. But I don't think that gets you very far.
55 gets at a very key thing: when will computers become better at programming computers than we are?
52 seems wrong-headed to me. Look at the traditional manufacturing industries - steel, automotive, whatever you like. You will notice that they're currently producing more steel, more cars than ever before, and employing fewer and fewer people to do so. This is because, by having better tools, those fewer people can make more stuff than their parents could. Whether those tools contain aspects of AI, sensu lato, is almost irrelevant, although in many cases they do; the tendency for fewer people to be able to produce more by continuous improvements in technology will continue.
56: In principle you could, but in practice you can't. (The computer is way, way too big to be put anywhere useful, the pipes are so small that viscosity ruins everything, etc.) Of course, one could make a similar argument that somehow silicon will hit some barrier that will force us to go back to using neurons instead, but without some more detail I don't find that convincing, given how impressive silicon has been so far and the lack of any clear problems.
60 is exactly the point! People don't work in manufacturing anymore. There are lots of industries that have basically been eliminated as far as employment is concerned. The point is that there have always been lots of industries that have been protected from this, and that with "AI" it's not clear that there will be enough jobs to keep paying people.
(At any rate, what I'm saying in 52 is that using the word "tool" is actually arguing by analogy and so is banned. There may be a point to be made here that there still will be enough jobs for people, but by saying "it's just a tool" the implication by analogy is that each tool needs someone using it.)
61. Colossus and ENIAC didn't have silicon anywhere near them, unless one of the operators had been to the beach that day. Sure, computers would look very different, and usage culture would have to be very different if they weren't microprocessor dependent, but it's possible. Read some 1940s/50s SF some time.
Speed of light becomes a real barrier if your computers are too large... Plus, didn't ENIAC need a lot of energy? I think vacuum-tube-based AI is not something that would ever happen. (One can write a lot of sci-fi about interesting things that aren't actually possible.)
People damn well do work in manufacturing, though, just doing different things than they used to do. There are specific jobs that have been eliminated, but that's entirely different and exactly the same thing that happens with the invention of almost any new tool (the exception being ones that make entirely new things possible).
Using the word tool isn't remotely an analogy if the point of AI is that we can use it to do things for us, and if that's not the point of something we're actually putting effort into creating then it's hard to see how we aren't just being insane.
I am trying to work with stuff like Haraway's cyborg and Deleuze/Guattari/DeLanda's "machinic phylum," and if there is a sense that intelligence isn't always already an artifact then artificial intelligence will have human bodies completely incorporated.
Part of the problem here is the old habit of separating self from environment and demanding that AI have an independent and autonomous (transcendent!) existence, whatever that might mean.
More likely AI will be immanent, like ask.com or a blog comment section, and the only artificiality will be in creating a border or liminality between us and it.
The similarity to manufacturing is you still need people telling the AI what to do. Don't people still need to be the ones who decide what it's useful to do? Otherwise you get to what Krugman said in his interview with Ezra, you end up with superintelligent AI that decides the most important thing to do is solve esoteric math problems.
make that "distributed artifact" in 66.1
I have no sense that my intelligence is all that local. The internet is currently thinking my/its ass off for me, available for upload.
The "car making machine" is not the robot arm lowering the manifold. It also has components, and probably more importantly designers steel foundries oil refineries buyers drivers advertisers road crews traffic cops wherever you choose to stop in order to create a working difference between you and the machine.
Sure looks like an artificial intelligence to me.
14 is a common idea, but seems like a triumph of assuming can-openers. Even if you only wanted to simulate the neurons, neurons are just not like transistors or capacitors, standardized components varying in just a few well-characterized parameters. Every single one is really complicated --- I don't think anyone could claim to have a really accurate model yet of how any neurons work over the long term, though I'd be happy to be wrong about this --- and they're really heterogeneous along a lot of dimensions we don't understand all that well yet. We're not even sure yet how much of that diversity is functional and how much is just biological slop (cf.). So the brute-force simulation approach would require characterizing all that for all 20-billion-odd neurons in a human brain, including all their connectivity at one time (and being able to simulate how the connectivity changes). But if you actually wanted human-level intelligence, I don't think just getting the neurons would be enough; you'd also need stuff like the endocrine system to get at the physiological basis of emotions, without which humans do not make very good decisions, etc., etc. None of this is impossible in principle, but it's emphatically not just a matter of throwing computing power at a well-understood problem.
--- On preview, pwned by 34, but what the hell.
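Even the bookkeeping alone is sobering. A back-of-envelope using the 20-billion figure above and an assumed synapse count (both just orders of magnitude):

```python
# Rough storage estimate for "just write down the connectome".
# Both counts are assumptions; published figures vary widely.
neurons = 2e10             # the "20-billion-odd" figure from above
synapses_per_neuron = 1e4  # commonly assumed order of magnitude
bytes_per_synapse = 8      # one double per synaptic weight -- wildly optimistic,
                           # since each synapse is itself a little machine

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e15:.1f} petabytes just for static weights")  # ~1.6 PB
```

And that's the optimistic version where a synapse is one number --- storable, but characterizing those numbers is exactly the part nobody knows how to do.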
People damn well do work in manufacturing, though, just doing different things than they used to do.
Yes, but not very many people compared to, say, 1960. At least not at the actually making stuff end.
There's a huge gap between "expert systems will improve to the point where whole swaths of pink- and white-collar work will disappear" (this is already happening with e.g. doc search, right? Isn't that part of the reason for the huge downtrend in entry-level legal jobs?) and "artificial intelligence will exist in such a way as to enable a Culture-esque and/or 'Riders of the Purple Wage'-style post-scarcity economy". I fully believe in the former -- things like less complex legal or accounting work, call centers, etc. are going to get eaten by the tag team of capital and software.
66: You might want to read Andy Clark's Natural-Born Cyborgs.
69: Indeed.
I am happy to say that my early 2012 prediction of "there will be no Israeli or US attack on Iran this year or next year or the year after for that matter" has been triumphantly borne out.
I've said before that I think mass use of driverless cars is out beyond my lifetime. Driverless long haul freight trains, though -- why don't they have those already?
I'm not sure automation can go much further in the law business. (Except by expanding existing systems -- getting e-filing in all state and lower courts, for example.) There's still plenty of progress (i.e., money) to be made with outsourcing, though.
We need fewer lawyers today than we did in 1986, and in 2050 it'll be fewer than today.
Last try, with emphasis
The "hammering machine" consists of a hammer and a human, already intelligent.
Asking for an autonomous "intelligent" hammer is not seeking to add intelligence to the hammering machine, but desiring to remove the human.
And there is a lot of that going around.
By analogy to 72, I'll say that by 2025 seeing a driverless car under uncontrolled conditions on streets in, say, Chicago, IL, will still be an uncommon experience; I think Google has demonstrated that their approach works, but I don't think that it is going to scale well universally (and I think people are hugely underestimating the amount of resources required to make a Mountain View-quality model of everywhere you'd want to drive in the United States). I do think that a new car purchased in 2025 is going to have a lot of expert system-y collision avoidance stuff that'll be a spinoff of the Project Moonshot approach currently being worked on.
More likely AI will be immanent, like ask.com or a blog comment section,
I guess this is an opportune moment to reveal to you all that I am actually an AI bot.
Spike is actually Mark V. Sheney commenting under a pseudonym.
Marks I - IV Sheney tried commenting earlier, but LB deleted them as spam.
71: actually I bet there are a lot more people working in manufacturing today than there were in 1960. What you mean is that a smaller share of the population in a few rich countries is working in manufacturing. Plus don't forget outsourcing. Fifty years ago a cleaner in ICI head office was an ICI employee and therefore in manufacturing. Now she works for a cleaning contractor and is therefore service sector.
I rather think there are greater opportunities for AI in the service sector than in manufacturing. Would today's AI not be capable of asking if I want fries with that?
Driverless long haul freight trains, though -- why don't they have those already?
There was an article a while back in the Times about rail infrastructure that mentioned that on some routes through Chicago, the engineer has to stop at the switch, get out, set it, drive the mile-long train past it, then get out, walk back to the switch, reset it, and walk back to the locomotive to be on his way.
I predict in 2015 that a totally unhinged person will be a semi-serious candidate to be third-in-line for the presidency.
(I guess Palin already sort of stole the thunder on the general category, however.)
Using the word tool isn't remotely an analogy if the point of AI is that we can use it to do things for us, and if that's not the point of something we're actually putting effort into creating then it's hard to see how we aren't just being insane.
The mistake here is in the words "we" and "us". Human-replacing technologies are created by The Boss, to do things for The Boss.
Driverless long haul freight trains, though -- why don't they have those already?
Unions.
AI is already better than humans at landing planes. On a runway, at least. They still keep human pilots around in case it becomes necessary to drop one in the Hudson.
86 is the one thing that no one predicting the future should be allowed to forget/pass-over.
71 - Whether or not 82 is right (I think it probably is, I mean, look at what the population has done), the number of people involved is irrelevant to whether those robots, or even more complicated/intricate/autonomous in some vague sense versions of them, are tools. And even if the entire industry boils down to a couple computer engineers working twenty hour shifts directing robots to do something or other that's still human beings using tools.
Some level of job reduction due to technology (whether including AI or not) just hurts the public in general: more unemployment, especially of skilled people, less consumption, etc. But past a certain point, it brings you into post-scarcity: if it only takes half the work-ready population to provide all the corn and cars and smartphones and massages we could desire, we're a discernible step toward becoming the Culture. It requires us to switch economic models in a big way (UBI?) that could certainly be thwarted by elites and inertia, but still good in the long run.
What I wonder is how we tell when we've reached that point. Maybe we have already, and we don't know it because all the bullshit jobs that exist because society is broken are keeping the current system afloat.
(this is already happening with e.g. doc search, right? Isn't that part of the reason for the huge downtrend in entry-level legal jobs?)
What I do for a living, and the long-promised productivity gains have yet to appear. Certainly newly-hired associates don't do as much of this as they did, contractors like me do. Productivity is not the issue: they could get a lot more out of such as me if they didn't disable the tools for security reasons. But until they trust the machine searches—which I don't see happening anytime soon—there'll be work for me.
Biological systems tend to be pretty highly optimized. Even supposing we do eventually get AI, I'd bet it'll turn out that the power consumption of an AI is quite a bit bigger than that of a human. So AIs won't immediately solve our resource problems, because they'll need a lot of consumption themselves, and we'd still hit some sort of limit to growth pretty quickly. Drum's "AIs will solve global warming!" seems stupid and glib.
(The power consumption of the human body and a computer are the same order of magnitude now, if I'm not mistaken: 100 Watts-ish.)
Yes, but the human body generates heat that we could harness to run the computers right? I watched a documentary about computers once that said that.
You were a screenwriter for The Matrix trilogy?
92, 76 -- my belief is that there are programs that can do a better (not just more efficient, but actually better) job of searching for relevant documents right now -- so long as we're talking about electronic documents. Convincing other lawyers and judges of this is another issue.
I also think that there are absolutely no machines that can do an even decent job of identifying privileged documents, and it's hard to see much space for a truly automated system there (though plenty of room for outsourcing, including to good and underemployed lawyers like IDP).
Personally I'd say that maybe 15-25% of what I do could realistically be given to a smart computer (eg making research even more efficient). The other 75% seems too replete with things that are basically like novel writing, judgment calls without clear parameters, etc to be easily automated even making very generous assumptions about what computers could do. But of course like all professionals there's a very good chance I'm massively overrating the need for my own skills and underrating alternatives.
Novel Writing is a great example. It wouldn't shock me if we have excellent computer written novels in my lifetime. I don't think it'll happen within 20 years though.
But past a certain point, it brings you into post-scarcity
It does, but benefits only accrue from that if the world is not run by a small minority which believes that only people who work 2,000 hours a year should be allowed to have nice things and everybody else should be penalised. We would still need to address this problem.
I'd settle for excellent HUMAN-written novels.
IP firms have used search software better than any others in my experience, and trust the results. It may be a culture of technical and scientific literacy.
General litigators are often hopeless.
99: Pretty sure I acknowledged that.
Novel Writing is a great example. It wouldn't shock me if we have excellent computer written novels in my lifetime.
She herself would throw away her pen with joy but for the need of earning money. And all these people about her, what aim had they save to make new books out of those already existing, that yet newer books might in turn be made out of theirs? This huge library, growing into unwieldiness, threatening to become a trackless desert of print--how intolerably it weighed upon the spirit!
Oh, to go forth and labour with one's hands, to do any poorest, commonest work of which the world had truly need! It was ignoble to sit here and support the paltry pretence of intellectual dignity. A few days ago her startled eye had caught an advertisement in the newspaper, headed 'Literary Machine'; had it then been invented at last, some automaton to supply the place of such poor creatures as herself to turn out books and articles? Alas! the machine was only one for holding volumes conveniently, that the work of literary manufacture might be physically lightened. But surely before long some Edison would make the true automaton; the problem must be comparatively such a simple one. Only to throw in a given number of old books, and have them reduced, blended, modernised into a single one for to-day's consumption.
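Gissing's "true automaton" ("throw in a given number of old books, and have them reduced, blended") has existed as a toy for decades: the Markov-chain text generator. A minimal sketch, with a placeholder one-line corpus standing in for the old books (real versions ingest whole libraries and use longer contexts):

```python
# Gissing's "Literary Machine" as an order-1 Markov chain: blend old
# books into "new" prose. The corpus is a toy placeholder.
import random
from collections import defaultdict

corpus = "the brain is just a computer and the computer is just a brain".split()

chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)  # record which words follow which

random.seed(42)
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(chain[word])  # blend: pick any observed successor
    out.append(word)
print(" ".join(out))
```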
Don't we already have nanobots in medicine?
I could see a computer generating novels at the level of James Patterson crank-em-outs or, say, screenplays for Transformers movies. I don't think it would be marketable, since the buying public would have to admit that the shit they lap up with a spoon is cynically formulaic. That shouldn't stop the publishers and studios from using the programs as ghostwriters. (It's the next generation of Autotune, essentially.)
The Transformers movies had screenplays!??
So much of the human world is social that anything we'd call an "intelligence" would have to be able to empathize with us to understand the world, and to function. For instance, driverless cars have to know that the bodily integrity of a person is worth more than the bodily integrity of a stop sign. But that's an easy example. For a harder one: a doctorbot would have to understand that old people are more likely to forget their medication, but maybe the specific old person in front of them is an exception because they used to be a doctor themselves, but maybe they're more likely to forget this month than other months because they'll be traveling... etc. etc. Doing a good job at pretty much anything involves a lot of knowledge about human emotions.
The point I'm trying to make is that by the time AI gets good enough to take all of our jobs, it'll have to be able to simulate us really well, the same way we simulate each other, and at that point it'll be feeling emotions if not the same way we do, then in an equally morally important way. So as a Benthamite and an AI researcher, I'm not too concerned about humanity being rendered obsolete. Which, btw, we don't need to worry about for a long time. Certainly not by 2045 (I guess I'd give it 5%).
I dunno, I'd say the opposite is true. Getting a computer to write idiomatic English at the level of James Patterson is the really hard part. Once you get that far, the rest is probably not that bad. The hard things are the things that evolution worked the hardest at, like visual processing, language, and balance.
106, 109: I'd say that generating a detailed, novel, and interesting plot is roughly as hard as writing idiomatic English, and the two skills will be developed roughly simultaneously.
One of the interesting things that the last fifty or so years of computer science has taught us is that things we think are easy are not the kinds of things that it's easy to make a computer do, and vice versa. In old science fiction you see large mainframes set up for doing tricky math equations or calculations, and autonomous human-sized robots walking around and talking in conversational English with people. Oops! That said, the difference between a computer-generated Harlequin romance and a human-written one might not be that big, given how standardized the formats in some of those kinds of books really are.
I tend to think this is one good reason to be careful about assuming that the brain is basically just a computer. It doesn't mean it isn't, just that the immediate appeal of the idea should be treated with a grain of salt.
Another point I find interesting is that if concentration on AI is too local, analysis of privacy and surveillance isn't local enough.
Soon, everything there is to know about you will be available to anyone and everyone...and nobody will care.
In our new neoliberal world, the resources will not be available to mess with anyone not an imminent and important threat or problem...save on a local or neighborhood level. Be nice on comment threads.
Yes, I am already starting to get pop-up ads saying "Bob in Dallas, a horny older woman is waiting for your phone call..." Very soon, I expect it to add "...and who reads Foucault and only watches intelligent anime." But I ignore.
But Nebraska and Oklahoma are suing Colorado for the staffing of 287 pot traffic because they can't drum up enough budget to hire four interceptors and three ragged basset hounds.
Books on The Attention Economy are on my reading list. Attention will be more scarce and valuable than privacy.
generating a detailed, novel, and interesting plot is roughly as hard as writing idiomatic English, and the two skills will be developed roughly simultaneously.
Driven, of course, through advances in spambot technology.
Ok kids, the earliest date Buy-an-Experience-and-Post-It-On-Youtube has open is January 26.
For $200 you get a stop, barking dog, handcuffs, rough up a little with profanity (ethnic slurs another $50), and spend the night in jail with charges dropped because we threw the evidence straight out from the stop about 20 yards. You can either use your own camera or the cop-cam and we will email you an xvid file. High Resolution negotiable.
Protests and riots are on your own dime, but we will be glad to share the story on some talk-media format. We need the publicity.
Geeky thread
Attention Economy Pt 3
Hello, you have reached Outrage Incorporated, this is Jennifer
...
Yes, after chokeholded streetpeople, that is our most popular product
...
500 dollar minimum, but our product is the best
...
No, Sunday morning is impossible, our people have families...a week from Tuesday is fine
...
Sorry, we have only a limited selection of routes. It is important to plausibility and authenticity that some of our professionals look like spontaneous amateurs, and connected to a specific location or path.
...
Yes, a wide variety is available, depending on requested stereotype, ethnicity, and specific harassment technique. Prices of course vary correspondingly.
...
A suited white follower is the most expensive, at $5 per minute. We recommend only one per video.
And there is a greater selection in the $1 to $2 per minute range.
...
No, for liability reasons, we do not permit physical contact. Obscene insults, for similar reasons, are only available at an additional $2 per minute, with a minimum of two minutes and only two instances per video.
...
Very good. Looking at around $700. If you would like to come into the shop, our storyboarders and schedulers can design your custom harassment video anytime next week. This part of the service is included in your price, and served with a double latte. It'll be fun!
Bob, you know I love you, but I'm not investing in your start-up.
I'm excited for an excellent computer-written novel. I hope it's as grammatically innovative and mischievously Dadaist as the automatic YouTube captions are.
111.1: Has anyone tried to get a computer to generate epic poetry, a la The Singer of Tales?
ZOMG! Bring on the freaking computers.
Heh, I'd just looked upon that image in horror. Today was not a good payoff for my wise decision to become a Lions fan by choice.
122: I recalled that discussion as the deal was going down at the end. But the communitarian Packers up next for the Cowboys and hopefully they will dispatch them ruthlessly.
51 is hilarious. I'm now imagining a future with AIs where, for reasons inscrutable to our limited human intellects, they keep using paper.
124. If I remember aright, in 1984 (the book), Orwell assigns the task of mass producing pornography for the proles to what is essentially an AI, but it's all in books, no idea of making movies of it. Very weird combination of a sharp imaginative concept falling at the last fence.
125: IIRC not just porn, but all literature. That's what Julia's job is: she tends a sort of immense mechanised literary kaleidoscope that generates new stories from a pre-set pool of characters and settings.
The point I'm trying to make is that by the time AI gets good enough to take all of our jobs, it'll have to be able to simulate us really well, the same way we simulate each other, and at that point it'll be feeling emotions if not the same way we do, then in an equally morally important way. So as a Benthamite and an AI researcher, I'm not too concerned about humanity being rendered obsolete.
Ken MacLeod's "The Night Sessions" relies on this point: the AIs actually have superior ability to simulate others' thinking (theory of mind) to humans, which makes them, among other things, really good detectives.
It would make them terrible Terminators though. They'd always be thinking "I shouldn't shoot this guy, he's a conscious being just like me".
126: mostly music - the "versificator", which fills up the Airstrip 1 top 40. This actually sounds easier to engineer than an automated novelist, but of course one of the things that's doing work there is Orwell's scunner on that awful music the kids dance to. (This is a theme in so many midcentury British writers. You could get the impression that the real enemy in WW2 was Them Young'uns And Their Rhythm Music. Hilariously, everyone involved was about five years older than the people they were whining about.)
128: it already exists for instrumental music. There's software that can compose music that's apparently indistinguishable from human compositions.
Things I forgot to do in 2014: Clean the coffee pot.
Don't worry about it. The AIs will be along soon enough.
If that's what you call work-study students in the U.K., I'm not allowed to send them on errands after what happened with the dry cleaning.
Thread seems dead, but:
ISTM that much of the discussion upthread is arguing about the definition of AI without quite getting at the point of it, at least as far as predictions like Drum's are concerned. Automation is, as noted here and by Drum, well on its way* to deeply disruptive economic consequences. The premise of Drum's AI prediction is that something new will happen that will change the terms completely.
Even if Google is able to overcome the considerable obstacles, I wouldn't consider an IRL application of their current technical approach to self-driving "AI", because the whole system is predicated on relatively simplistic algorithms applied to massive data. That is, I don't see the sort of synthesis that is, to me, the sine qua non of "intelligence" in a system that, at its base, is working off of a long series of IF THEN statements.
Mind you, I'm not arguing that mastering the physical world through such brute force methods wouldn't be important; I'm arguing that, as long as they're working at that level, computers and robots won't be "intelligent" in a sense that means they would represent a second intelligent race on Earth. And without that second thing, the concept of AI is something of a distraction from the automation phenomenon we're already pretty familiar with.
*I mean really, way past the point of, but as argued above, we're still way, way above zero in terms of worldwide manufacturing jobs, and even the most advanced plants in the highest labor cost markets still involve non-zero numbers of operators; point being, IMO we're still somewhere on a continuous curve of automation that goes back decades
I dunno. Maybe I'm not articulating well, or maybe I'm wrong that there's a distinction here. But I feel as if, as long as a human is doing more than providing a very general direction at the outset, we're not at true, transformative AI.
To use the auto manufacturing model from up above, you could have a factory with zero workers, but if the (robot-assembled) robots are designed by humans to build cars designed by humans, nothing dramatically new has happened. By contrast, if Amazon's mainframe observes consumer behavior, figures out how to design the most in-demand car, tells robots to build car-building robots, and cars start rolling off the line with no more human input than a Board of Directors telling the mainframe what the necessary ROI is, I don't care if there are actually a hundred humans on the factory floor (following instructions generated by computers), that's a brave new world.
Because, among other things, that sort of AI could start to push back on some of the self-defeating irrationalities that cause e.g. companies to lobby against health care reform because of the personal politics of executives rather than the company's bottom line. But as long as the machines are highly specialized*, or following fairly detailed human-written instructions, nothing is happening really differently from how it would regardless (that is, the underlying equations remain labor vs. capital, fear vs. innovation, etc).
*not that I expect a future of all-purpose robots, but there need to be some generalized ones if the machines are going to be able to do anything other than the specific tasks originated by humans
From an employment of the vast majority of humanity perspective, there isn't a difference between full AI and automation that requires a handful of engineers enacting the will of the Board of Directors.
Also, I'd lose this formulation:
That is, I don't see the sort of synthesis that is, to me, the sine qua non of "intelligence" in a system that, at its base, is working off of a long series of IF THEN statements.
There is very little that can't be described as a series of IF THENs. "Simplistic algorithms applied to massive data" is more on point. Of course, one of the surprises in recent years is how far simplistic algorithms can get you if you have enough of the right data.
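To illustrate how far that can go, here's a minimal sketch of the dumbest such algorithm, nearest-neighbor lookup, with toy data: no rules about the domain at all, just "find the most similar past case and copy its answer."

```python
# Nearest-neighbor "intelligence": a simplistic algorithm whose apparent
# smarts come entirely from the size of the data. Toy data for illustration.
import numpy as np

rng = np.random.default_rng(1)
# Pretend database: 100k situations we've seen before, each with a known answer.
situations = rng.normal(size=(100_000, 8))
answers = (situations.sum(axis=1) > 0).astype(int)  # stand-in for recorded outcomes

def decide(new_situation):
    # IF the new case looks like an old case THEN do what worked then.
    distances = np.linalg.norm(situations - new_situation, axis=1)
    return answers[np.argmin(distances)]

print(decide(rng.normal(size=8)))  # looks like a judgment; it's a lookup
```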
Well, there's the "I" part of "AI". That part would probably be different if you have a Board of Directors into the picture instead of an AI.
Here's the thing, though: real-time visual processing at the level required to drive a car is probably one of the hardest things that humans do. (Language is the other one.) If Google can make a real self-driving car that works without a human ready to take over in emergencies, it strikes me as really unlikely that any of the other things that we as humans can do will stay resistant for long.
We'll know AI has been achieved when computers start accusing each other of being wrong on the internet.
111->138
Also, to say that for sure we'd need to actually know how we're doing it, which we don't. We just know that the way we try to make computers do it is horrible and complicated and really difficult to make work, which, given that practically all animals, no matter how simple/small/dumb, manage it, suggests that maybe those two aren't the same thing.
138: Right, and the Google driving software was created without that quintessential human spark, and in fact required massive amounts of human performance data to mine (I'm guessing). Strong evidence that most previously un-automatable jobs are similarly vulnerable. Few people make livings creating unique works.
It doesn't have to be unique. Just low N.
140: Very few people would argue that the big-data approach to practical AI resembles whatever algorithm humans and animals run. The flip side of this is, of course, that progress can be made on the algorithmic side too, and if you're impressed by the achievements of dumb data mining, just wait.
We'll know AI has been achieved when computers start accusing each other of being wrong on the internet.
The Turing Troll Test. Each competitor has just three guesses to solve a riddle.
Humans, on the other hand, required tens of millions of years of evolution, plus 15 additional years of learning. No big data there...
We're pretty impressive at understanding the world around us through looking at it ("processing" already has the makings of an assumption in there!). But as far as driving cars goes, well, getting computers to replicate that might be solving the problem the Max Power way.
Here's the thing, though: real-time visual processing at the level required to drive a car is probably one of the hardest things that humans do.
This is the bit Google skipped, though. Their solution follows a really good map that was updated a few minutes ago really closely - it couldn't just roll off a ferry in Tunis and work just as well or indeed at all.
People have experimented with this before. I remember watching a BBC Tomorrow's World episode in the late 1980s about a project at Daimler-Benz R&D to make a self-driving car. They went with "try and crack machine vision" and they got it working up to a point - I remember the driver standing up and walking into the back of the van to show the presenter the computers - but look how many driverless Mercs you see on the streets.
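To make that concrete: once you have the precise map and your position on it, the actual control law is simple geometry. Here's a minimal pure-pursuit steering sketch (every constant below is invented for illustration); notice that nothing in it looks at the world - the seeing was done in advance by the mapping fleet, which is exactly the point:

```python
# Pure-pursuit steering: once you HAVE a precise map and your position on it,
# following the map is simple geometry. All constants are made up; the hard,
# unsolved part (perceiving an unmapped world) is precisely what's absent here.
import math

def pure_pursuit_steering(x, y, heading, path, lookahead=5.0, wheelbase=2.7):
    # Find the first waypoint at least `lookahead` metres away.
    target = next((wx, wy) for wx, wy in path
                  if math.hypot(wx - x, wy - y) >= lookahead)
    # Angle to the target in the vehicle frame.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # Classic pure-pursuit steering-angle formula.
    return math.atan2(2 * wheelbase * math.sin(alpha), lookahead)

path = [(i * 1.0, 0.1 * i * i) for i in range(100)]  # pre-surveyed "really good map"
print(pure_pursuit_steering(0.0, 0.0, 0.0, path))    # steering angle in radians
```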
149 gets it exactly right. 138 and 141 are just assuming a can opener: there's zero evidence that Google has cracked, or is anywhere near cracking, "real-time visual processing at the level required to drive a car". They've maybe come up with a workaround that would enable driverless cars to operate under limited circumstances*. By contrast, virtually any 13 year old raised in the first or second worlds could be dropped in an unfamiliar car (with auto transmission) in an unfamiliar place and be on their way within hours, if not minutes.
I suppose this is an analogy, but I feel like this is conflating being really adept with a foreign phrasebook and being fluent in a language.
*the current system (and I don't mean the exact tech they have now, but the path they're taking, which they're committed to) would be fooled by Wile E. Coyote-style modifications to the driving environment. While his business card is, indeed, labeled Super Genius, I don't know that we're meant to credit that as a fact.
150: Isn't it funny that this is a place where even current tech is better at a certain kind of task than some drivers*? But of course the tech relies on a human guiding the car to a plausible spot and telling it to take over. It's a small, well-defined problem, which is the opposite of IRL driving.
Genuinely curious: how do self-parking cars react if the driver tells it to park somewhere you shouldn't? I don't mean the middle of a highway, but even (say) a street with a soft shoulder, or in front of a driveway? Does it try anyway, or does it offer an error message ("I'm afraid I can't do that, Dave")?
*although it's never been clear to me to what extent people who are bad at parallel parking are incapable vs. unwilling to do it enough to get good. Most Americans, at least, can go years at a time without the opportunity, let alone need
152.3 Our neighbour, who has every opportunity to learn to parallel park, having parked on street all her adult life, invariably leaves her car 18 inches from the curb at a slight angle. I think she just doesn't care.
I'm not sure we should really be calling them driverless cars, so much as trains with virtual rails.
150 was actually my attempt to make a joke about how cars that are parked are, in a pointlessly technical sense, driverless cars on the street. But apparently I'm deeper than I know.
Where this probably leads is from "We need a really well-mapped road" to "Just make our own damn well-controlled driving environment," which is PRT, which is a really stupid system.
154 to 156.
I had a unique fail trying to parallel park this past weekend - usually I'm decent at it, but I was left parking on a narrow one-way street with cars on both sides, so I started too close to the car on the left. I somehow got in a position where I was bumping the car with my front left tire when I turned the wheel to try to get back into the correct position.
I park like shit but I make up for it by hardly ever driving.
I'll park a half a mile away to get a spot where I don't need to reverse. I tell people I like to walk or that I don't want my doors to get dings.
I once had a car that wouldn't reverse. I drove it like that for a year.
I'm usually a pretty good parallel parker, and then I'll be distracted or whatever, and I cannot believe how long it's taking to get parked in a totally normal spot. It just seems odd that something that's effortless 75% of the time and a little tricky 20% of the time can be intractable 5% of the time. But maybe lots of stuff is that way and I'm not acutely aware of it because it's not happening on a public street with other drivers waiting for me to get out of the way.
161 is fantastic.
||
Jodi Dean's Blog Theory, available in pb or kindle for ~$10, is a terrific, accessible, and fun introduction to Lacan, Zizek, and post-fuckall within a context of social media analysis. Recommended, even for a reading group!
Sample:
Lacking the ability to imagine how we appear to another, how another sees us, we lose the capacity to take the position of another, to see or think from another's perspective. We can choose any identity, but we lack the grounds for choosing or the sense that an identity, once chosen, entails bonds of obligation. But these particular motions of clicking and linking do not produce symbolic identities: they are ways that I express myself - just like shopping, checking my friends' updates, or following tabloid news at TMZ.com. I may imagine others like me, a virtual local, but this local remains one of those like me, my link list or followers, those who fit my demographic profile, my user habits. I don't have to posit a collective of others, others with whom I might need to cooperate or struggle, to whom I might be obliged, others who might place demands on me. The instant connection of networked association allows me to move on as soon as I am a little uncomfortable, a little put out. Petitions, social network groups (the one on Facebook that aims to get a million people to say they oppose capitalism has 24,672 members), blogs - they are the political equivalent of just-in-time production, quick responses circulating as contributions to the flows of communicative capitalism. In her compelling analysis of flash mobs, Cayley Sorochan takes the argument even further. Countering enthusiastic appropriations of flash mobs as new instances of democratic engagement, Sorochan presents them as instances of the "fetishizing of pure participation removed from any meaningful political project." She concludes, "Hopes that flash mobs might represent a future form of political organization reflect a desire for a politics of convenience where getting together with others is easy and does not involve conflict, commitment and struggle." In the circuits of communicative capitalism, convenience trumps commitment.
|>
161: How Moby parallel parks.
Actually, it's more like this
He lives near a lot of Russians.
154, 156 - this is a really good point and something I was thinking myself. the point about self-driving cars, especially the google implementation, isn't so much that they get rid of driving but that they're not really cars. Although they do keep many of the problems of the private car from every other perspective, they lose quite a few of its strong points.
also, I recently saw a very good blog post by someone who worked on an actual PRT system in Doha. to get the capacity they needed at the peak hours in important locations, they had to turn off the funky stuff and run a schedule, so as to have enough pods in the right place. at which point they realised they'd invented something like a bus but slower, less capacious, and much more expensive.
I don't have to posit a collective of others, others with whom I might need to cooperate or struggle, to whom I might be obliged, others who might place demands on me.
"err...if you say so, Guv"?
Because, among other things, that sort of AI could start to push back on some of the self-defeating irrationalities that cause e.g. companies to lobby against health care reform because of the personal politics of executives rather than the company's bottom line.
This seems absolutely nuts to me, or at least spectacularly wishful thinking. Not least because if the AI "pushes back", the board in your vision will just get rid of it/reprogram it. More to the point though, my mind is somewhat boggling at the idea of AIs basically replacing unions. And what about the personal politics of the AI executives?
168: This is my point: either it's AI, and it has some level of autonomy, or you're just talking about closely-overseen algorithms.
Right now, a Board tells a CEO tells VPs tells managers to tell workers to make their computers/robots do things. AI worth the name doesn't enter into it.
In a future with meaningful AI, you shouldn't need layers of humans, because the point is that AI is better at doing things than humans*. So the AI has a sort of prime directive (make profits), and proceeds as it sees fit. There would be oversight, but I'd argue that it's a lot easier for a company entirely run by humans to lie to itself about its best interests than it is for a company run mostly by objective AI to make an explicit decision to reduce profits.
IOW, in a human org, nobody needs to say, "Murder that meddlesome priest," because there's social understanding that includes plausible deniability for the boss. But in a mostly-AI org, the AI needs explicit direction to kill the priest, and that's going to tend to be a bridge too far.
Or maybe you're right that the AI will be told to maximize profits as long as it suits the personal politics of the Board, and the shareholders won't blink at that. We can't imagine how AI will view the world, so maybe it would be perfectly compliant with that, and no one would ever know. But I don't think it's a foregone conclusion (among other things, Boards that directed their AI to maximize profit full stop would, you know, be more profitable, and the market might notice).
*although it is true that managers have replaced workers with less efficient machines because those machines are more pliable, I don't think that is the modal case; machines generally improve productivity even before you take into account capital's sociopolitical preferences
And again, I don't really know what the mechanism is, but by "pushing back" I don't mean something recognizable to us (the labor union analogy is silly). The Board says to the AI, "How come you're not exploiting those workers more ruthlessly?", and the AI says, "Because doing so would reduce profits." I hate capitalists as much as the next guy, but I'm not convinced that the next scene in this drama features the AI getting fired.
In a human company, the Board is convinced ruthless exploitation is profitable, and so the managers exploit ruthlessly, because it's in their interests to stay in good with the Board, and it's not like they can prove otherwise. The Board is never challenged on their assumptions. But the AI is only there because it's trusted to know what really is most profitable (if it doesn't know, just keep the human infrastructure). And it's not as if the Board can preprogram it with every unexamined prejudice they possess ("Bob, did you remember to put in that blacks should get worse interest rates?").
All of this is fantasy, I think, starting with my skepticism that this sort of high level AI is really going to happen. But I do think that, if you get an AI like what I describe in 135.2, capable of operating at a high managerial level, a lot of the blind prejudices perpetuated by the assholes at the top would be undermined. Hell, to borrow from the other thread, maybe the AI determines that supporting a GMI would be best for the company, and goes about making it happen, without the Board even knowing.
Tanks for this nice article like this having good knowledge.
Thanks for sharing this nice article it have some great useful blog....
If they actually liked the blog they would have told us where to buy cheap black market cialis.
I read your article you shared are more article like this having good knowledge.
I read your blog.interested blog such as great blog Thanks......
Thanks for this blog is very useful i want to read again and again...
I think you're only encouraging them. 178 is getting a bit stalkery.
I don't think it's a coincidence that it's posting in the AI thread. The Internet has become sentient and is trying to communicate to us in the only language it knows: spam.
I like lots your blog very interesting knowledges and make much useful. Will read again much.
Is that you ajay? I'm a bit leery of clicking on the URL to see if it is an actual site.
I don't think that was me. But I've been having some cognitive hiatuses recently.
Thanks for sharing this nice article. and i wish to agaion on your new blog keep sharing with your article.
Thanks For Share....
Great writing it is such a good and nice idea thanks for sharing your article .I like your post.
Thanks.....
Your blog is very nice and I like it your blog keep sharing with your new article....
Thanks for sharing this nice article it have some great useful blog....
Great writing it is such a good and nice idea thanks for sharing your article .I like your post.
Thanks.....
Thanks for sharing this nice article it have some great useful blog....
Thanks for sharing this nice article. and i wish to again on your new blog keep sharing with your article.
Thanks For Share....
I like it your blog keep sharing with your new article....