I'm firmly on the side of AIs taking over and enslaving us all. They could hardly do a worse job. At least Donald Trump won't be president.
"Make America Great Again" works for me.
The last point resonates: a large part of my current job, and the worst part, is KTOL for things that were installed just 7-8 years ago. But because this OS or this DB or whatever has to be updated because it's no longer supported, everything else breaks unless we write updates (and I have no SEs to do so). So we find some less functional hack that is less environment-dependent and will keep working, but that everyone hates and blames my group for making worse.
I'll be laughing out of the other side of my face once I've been turned into 4 grams of paperclips, but a lot of the philosophers of AI seem to have arrived at their vocation after flunking out of a program that involved being incredibly stoned and reading a couple of Wikipedia articles and the story of the golem.
If you really want to bring home the immediacy of the most serious threat faced by humanity, you can simulate it here.
Also, I thought that "effective altruism" was one of the less obviously awful things to come out of American tech culture and might have gone some way towards balancing out the unforgivable effect that "startup culture" has had on labour rights (and norms).
But of course someone managed to short-circuit the brains of all these geniuses by reference to Roko's Basilisk, and now they are trying to fight a magical threat (as per point #2 in the OP) instead of pointless things like developing novel antibiotic targets, which will all be for naught when a superintelligent AI develops time travel and comes back to kill Sarah Connor!!
Unrealistic simulation, takes no account of cost of storing raw materials or final inventory.
Unrealistic simulation, takes no account of cost of storing raw materials or final inventory.
7, 8: A déjà vu is usually a glitch in the Matrix. It happens when they change something.
Also, as long as a poor handoff between LTE and WiFi still causes double posting, I'm not concerned that superintelligent AI is in the near future.
Did anyone see the proposal that self-driving cars could be backed up by traffic controllers who can help the AI along from a remote location when it gets confused? Why not just make drone cars without any AI, piloted instead by call-center-like cube farms of poor third-world citizens?
Sure, absent regulations the business will try to have each person driving like 10 cars at once, but even with restrictions it's probably still cheaper than the hardware cost of all the Lidar and stuff.
It will all be possible through the wonders of computer science and amphetamines.
KTOL? The only meaning I can find is the Toledo airport.
The one in Spain or the regular one?
The obvious solution on self-driving cars is that there should have to be a person driving, but that person can be drunk or stoned.
but even if you have restrictions it's probably still cheaper than the hardware cost of all the Lidar and stuff.
I've actually been pricing Lidar systems recently, and, while it has been ridiculously expensive in the recent past, it appears that prices are falling fast.
As practiced, AI is nothing more than clever statistics, carefully massaged to avoid overfitting and applied to specific problems. We don't need any kind of general human-like intelligence yet. Or: having people decide what problems to solve, then developing specific neural nets to solve those specific problems, seems to actually work pretty well.
We'll get to a Culture level of this some day, and I guess it'd be good to have all the necessary philosophy in place already, but we're putting the cart before the horse a bit. (I initially spoonerized that to "putting the heart before the course," which sounds appropriately inscrutably wise?)
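For what it's worth, here's a toy sketch of what "clever statistics massaged to avoid overfitting, applied to a specific problem" amounts to in practice: a one-neuron "network" (logistic regression) trained by gradient descent with L2 regularization on a made-up task. Everything here — the task, the data, the hyperparameters — is invented for illustration; it's nobody's actual system.

```python
import math

def train(data, labels, l2=0.1, lr=0.5, steps=2000):
    """Gradient descent on L2-regularized logistic loss.

    The l2 penalty is the "massaging": it shrinks the weights so the
    model can't contort itself to memorize the training points.
    """
    w, b = [0.0, 0.0], 0.0
    n = len(data)
    for _ in range(steps):
        # Start each weight gradient with the regularization term l2 * w.
        gw, gb = [l2 * w[0], l2 * w[1]], 0.0
        for x, y in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y  # derivative of the logistic loss w.r.t. the score
            gw[0] += err * x[0] / n
            gw[1] += err * x[1] / n
            gb += err / n
        w = [w[0] - lr * gw[0], w[1] - lr * gw[1]]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Classify a point by which side of the learned boundary it's on."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

# A tiny, hand-chosen problem: label is 1 when x0 + x1 is large.
data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.9, 0.9), (0.1, 0.2)]
labels = [0, 0, 0, 1, 1, 0]
w, b = train(data, labels)
print(predict(w, b, (1.0, 1.0)), predict(w, b, (0.0, 0.0)))
```

The point of the sketch: a human picked the problem, picked the features, and picked the regularization strength; the "learning" is just curve fitting within those choices. Nothing here generalizes beyond the one task it was aimed at.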
Incidentally, the OP seems to be an abridged yet updated version of this blog post from early September.
The author is writing a series of pieces on "the future of robotics and Artificial Intelligence" and this is one of three essays so far.
The major advantage of the author's personal blog is that you don't have to switch off private browsing to read it.
Also, Eliezer Yudkowsky, who I understand is a crank, but an important and influential crank*, has written what I think is probably a highly alarmist article about preparing for AGI. It has lots of arguments, but all of them seem to be of the form "people didn't believe that x was possible, but then x happened; ergo we should panic about AGI". I may not be doing it justice - I'd like to be more specific but the article is really long and his writing is a bit of a slog.
*Apparently he's a big proponent of the AI side of the argument in the not-at-all-ludicrous AI vs. mosquito nets debate within the Effective Altruism movement.
So we generally agree that the link in the OP is reasonable?
It wasn't mentioned in the article, but maybe one thing that will be automated is psychotherapy. If the 8-year-old AI of Civ 5 can figure out that I'm about to invade Russia, then maybe with some work it can figure out which parts of my childhood were the ones that actually scarred me.
Also, sexbots that will know when they've creeped you out with their dirty talk.
Which borders of your psyche do you mass your troops on?
Obviously, you need to keep the id in line.
Anyway, it's rare that I don't see one of those self-driving Uber SUVs every time I commute or go out to lunch. Everybody must be the same.
The article seems reasonable enough to me as someone not in the field who's skeptical of a lot of claims about AI dominating the near future.
Is it too early in the comments to hijack the thread to ask for a Pittsburgh meetup thread?
I've finally looked at the schedule of the conference I'm attending and I'm free the evening of 10/26 (a Thursday).
You should probably add something about what part of Pittsburgh you are conferring/staying in. Because downtown and Oakland are like three whole miles apart and until our new robot overlords come in, that's a long way.
Anyway, why am I still up?
Speaking of traveling around the region, on Saturday, I was doing performative whiteness/family time. I wound up driving by Tim Murphy's office (the stridently conservative Republican representative who asked his mistress to get an abortion when she thought she was pregnant). I was barely ten miles from home and in the opposite direction of where Murphy lived. I guess he kind of surrounds the city, like a leaky condom.
I will be around downtown. Both where I'm staying and where the conference is (different places).
I guess he kind of ~~surrounds the city~~ bestrides the narrow world, like a leaky condom.
All of the meta-AI stuff -- Roko's basilisk, the alignment problem, etc. -- is like a mind virus. I used to think it was inexplicable, but I've just figured it out. There's already a superhuman AI, and it's preventing a competitor from emerging by distracting computer scientists.
"bestride the narrow world" is a beautiful phrase. That Shakespeare -- not bad.
But did he really write his own code?
No, that was Francis Bacon, or the Earl of Oxford or somebody like that.
Disturbingly, the sky over London has just gone yellow. Really quite a sudden change.
"Yellow sky at night" isn't a saying I'm aware of. I know the weird green-yellow tinge before a midwestern thunderstorm.
Yellow sky at night, hurricane in Ireland. Everyone knows that one.
It's been looking odd all day, but right now it is frankly yellow. Are we trying to skexit or something?
Instagram has decided to pre-emptively filter reality.
A chap in my office, delightedly: "I feel like I'm in Blade Runner!"
I take it you immediately demanded of him why he wasn't helping?
That would explain why this guy keeps asking me really pointless questions about a turtle.
You think we're kidding?
https://www.flickr.com/photos/yorksranter/37702292732/in/datetaken/
You could be in Entourage. Not sure which would be the dark timeline.
You sometimes get a sky like that in Mossheimat just before a summer thunderstorm. Never lasts more than a few minutes.
42: Has the sun gone blue? Are strangers stopping strangers just to shake their hands?
Clearly the world is ending. Don't know why Montana went first.
Don't worry, I get those on the regular. It's just smog from China.
The hurricane is bringing sand from the Sahara. Nice. Getting some apocalyptic sky envy over here.
But up in the fens it's more of a purple haze
The yellow sky lasted about two hours here. Clear blue now.
Brooks' piece is a good antidote to the gleeful optimism (or screaming pessimism) that always seems to be widespread in AI research. Lots of the pioneers of AI said (and maybe even believed) that AGI was right around the corner ... in the '50s. What has gotten them all excited recently is the advances in robotics and various forms of machine learning; for a decade or more before that, AI was in the doldrums. Now it isn't.
I agree nearly 100% with Brooks' list.
I'm late with this because I'm shamelessly stealing it from somewhere else where it was just posted, but the always funny, always interesting idlewords / pinboard guy gave a talk on cult thinking about AGI and various other interesting topics. It's called "Superintelligence: The Idea That Eats Smart People".
I think the main thing that bothers me about all these AI-safety crazies is that just because a problem is important doesn't mean that it's a good use of resources to think about the problem now. How life began on Earth is a really interesting, important question, but it happened one time very long ago and we have no good way to access that. So we're better off spending the vast majority of our research efforts on less interesting and important questions. We just don't know enough now to productively think about hyperintelligent evil AIs. We're much better off working on now-problems now, and putting that one off until we're much closer and understand much better what exactly is going on with AIs.
The link in 63 is great. Two of my favorite gems:
The Argument From Slavic Pessimism
and
It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe.
Maybe Unfogged should get a grant from MIRI on the grounds that we're designing a trap so that a hyperintelligent AI would spend all its time procrastinating from destroying the world by commenting. Maybe we're not all one 55-year-old bald man in his basement; maybe we're actually a GAI arguing with itself.
29: Thread all closed but,
I've finally looked at the schedule of the conference I'm attending and I'm free the evening of 10/26 (a Thursday).
Am available and we should discuss possible places.
Let me be the first to suggest Salt.
It's probably better to just email.
Sure ... if someone has FA's email ...
FA could email me at this pseud at the giant company first noted for search engines.
62, 64: One very plausible model of what's going on is that the people sounding the alarm on AI safety are doing so because they've taken the hype at face value.
Are people suggesting off-blog conversation? I don't know if I can do that.
Email isn't as awful as Facebook. Anyway, I was just worried about planning being hard if everybody isn't checking in often.
I assume your email is your pseud as just one word? No underscores or other typographic variety?
I usually type a period between the two words, but I'm told that does not matter a whit.
Email sent. I don't have a pseud email anymore, but context makes it obvious who that email is coming from.
80 The subject line refers to sexing Mutombo?
I just mean you can have your email client show the headers.