This part from Drum was particularly over the top: superhuman AGI will solve both our energy problems and global warming.
Not all problems are due to insufficient intelligence.
messed up a tag there. The last line is me, not Drum.
A different way of thinking of these problems that ChatGPT can solve, is "gosh, those problems weren't very hard, were they?"
Ask it a hard problem. Write a program with a subtle bug in it, and see if it can find it.
I've asked it things like "function to take a quaternion and produce ZYZ Euler angles", and it fails with overweening confidence, but still, it fails. What's the difference between "produce a database schema for problem X" and the question I asked? Well, there are many different Euler angle combinations, and for each there is a function from quaternions. It takes knowledge of trigonometry to actually do the work, and what's available on the Internet isn't actually complete ... unless you know trig and can apply it.
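(For the curious: the trig is only a few lines once you build the rotation matrix. Here's a sketch, assuming a unit quaternion in (w, x, y, z) order and the intrinsic Z-Y-Z convention -- and picking the convention is exactly the detail the bot fumbles:)

```python
import math

def quat_to_zyz(w, x, y, z):
    """Convert a unit quaternion to intrinsic Z-Y-Z Euler angles
    (alpha, beta, gamma), by forming the rotation matrix R and
    reading off R = Rz(alpha) @ Ry(beta) @ Rz(gamma)."""
    # Only the rotation-matrix entries we actually need.
    r02 = 2 * (x * z + w * y)
    r12 = 2 * (y * z - w * x)
    r20 = 2 * (x * z - w * y)
    r21 = 2 * (y * z + w * x)
    r22 = 1 - 2 * (x * x + y * y)

    beta = math.acos(max(-1.0, min(1.0, r22)))
    if abs(math.sin(beta)) < 1e-9:
        # Gimbal lock: only alpha +/- gamma is determined; put it all in alpha.
        r00 = 1 - 2 * (y * y + z * z)
        r10 = 2 * (x * y + w * z)
        alpha = math.atan2(r10, r00) if r22 > 0 else math.atan2(-r10, -r00)
        return alpha, beta, 0.0
    alpha = math.atan2(r12, r02)
    gamma = math.atan2(r21, -r20)
    return alpha, beta, gamma
```

Other Euler conventions permute which matrix entries you read off, which is presumably why pattern-matching the Internet isn't enough.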
And something else: the vast majority of programmer time is spent in maintenance, not writing new code. Let's see how well ChatGPT does at debugging before we decide it's the bee's knees.
Honestly, I deal with problems the way the horse that people thought could do math dealt with math problems.
I think you're making this up because database administrators don't get laid.
I guess people have other jobs first.
It's wildly inaccurate in my field. Funny thing is they published a use case in that area and some AI people on Twitter were raving about it not realizing the answers were totally wrong because they were written confidently.
Yeah. Looking confident as a fifty-something white guy works when the clomping doesn't.
I think the most immediate effect of AI is going to be really shitty customer service as they replace people with chatbots and figure out a metric that shows the bot is better.
I'm just tired of hearing that it's the death of the humanities when it's best at replacing junior coders.
I love the "dumb Monty Hall problem" case, which seems to show pretty clearly the limits of regurgitation versus thought:
https://twitter.com/colin_fraser/status/1626784534510583809
Ogged, have you been reading Apperceptive?
https://apperceptive.substack.com/p/self-serving-thought-experiments
I'm looking at building some kind of AI application. The UN has a database of 30,000 documents from disarmament meetings going back to 1946. Various diplomatic statements about implementation details of biological weapons treaties and such. My plan is to hook it up to an AI summarizer and have it come up with a plan for achieving world peace.
Or, if not that, at least build a system for providing somewhat accurate meeting summaries.
Huh that writer seems like a smart guy. I'd like to meet him.
If self-driving cars could be taught to crash gently into the side of a human-driven car that almost hits a pedestrian or does a hit and run, I think they would make the roads safer overall.
It's wildly inaccurate in my field. Funny thing is they published a use case in that area and some AI people on Twitter were raving about it not realizing the answers were totally wrong because they were written confidently.
Yeah, the problem with these things is that they're confident enough that people think they can trust them, but they're inaccurate enough that they actually can't. And there's no real way to tell unless you already understand the underlying subject, in which case the bot isn't actually adding much (if any) value.
They're good for quickly producing highly formulaic types of writing with highly restricted parameters, and I think we're finding out that that covers more types of writing than most people had realized. But it's far from everything you might want a bot to do, and if you stray beyond the highly formulaic, restricted-parameter stuff, the results get ugly fast.
They're notoriously bad at math, for example. If you ask an arithmetic problem it will probably give a wrong answer.
The dumb St. Ives riddle was funny too*.
As I was reading Ezra's column on AI I thought I recognized sentence structures that seemed like ChatGPT (specifically the sections where he asks some questions to start each paragraph and gives a few-sentence answer). I was sure it was a gimmick where he'd reveal at the end that part of the column had been written by an LLM. But I guess he was just writing like one.
*Roughly: I was going to St Ives, and my seven wives were coming with me, along with seven cats, all of us going to St Ives. How many were going to St Ives? Answer: 1
This part from Drum was particularly over the top: superhuman AGI will solve both our energy problems and global warming.
Not all problems are due to insufficient intelligence.
I think it was Timothy Morton who said that the singularity is already here -- it's called capitalism.
Annual Gross Intelligence, taxed only on the first 60 points of IQ.
A different way of thinking of these problems that ChatGPT can solve, is "gosh, those problems weren't very hard, were they?"
This is indeed how people have thought about every problem that AIs have solved, after they have done so. I'm old enough to remember "sure, computers can play grand-master-level chess, but I'll only be impressed when they build one that can play Go at champion level. Go's a much more subtle game, you see..."
They're notoriously bad at math, for example. If you ask an arithmetic problem it will probably give a wrong answer.
Clearly the next step in AI will be developing a ChatGPT variant that is attached to a little robot arm with which it operates a pocket calculator.
13 sounds really interesting, and (to my uneducated eye) not impossible, because treaties are written in a very stylised and formalised language which is deliberately meant to be unambiguous, and that might make the job rather easier.
I work, professionally, a bit on problems like 13. Summarisation, named entity recognition, text classification, etc. These days, mainstream NLP libraries (like Flair or Spacy) and transformer-based tooling (BERT, etc.) are pretty good for that sort of thing, the code is well-documented, and there are loads of good examples you can crib from on Medium.com, or on Huggingface or the Flair, Spacy, or AllenNLP sites.
If you wanted to do some summarisation, some keywording (for quick document classification), and pull out names of people and organisations, you could probably put something together in a couple of days or less, if you just wanted a quick and dirty approach. If you wanted to train new models to identify and pull out legal terms, say, or topics that are specific to that corpus of documents, it might take a little more, but not that much more. There's stuff like: https://github.com/ICLRandD/Blackstone out there already (it's quite old, so isn't going to be up to date, but it's there as a model).
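For the "quick and dirty in a couple of days" end of that, you don't even need a transformer. A toy frequency-scoring extractive summariser is pure stdlib -- this is just a sketch of the baseline, nothing like what Flair or Spacy would give you, and the stopword list and scoring are made up:

```python
import re
from collections import Counter

# Tiny, made-up stopword list; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "on", "for",
             "is", "are", "was", "that", "this", "it", "as", "by", "with"}

def summarize(text, n_sentences=2):
    """Score each sentence by the corpus frequency of its non-stopword
    tokens, then return the top scorers in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in ranked]
```

On 30,000 diplomatic statements, the sentences that keep repeating the treaty vocabulary float to the top, which is roughly what a meeting summary wants.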
re: 13
If you want to punt some questions at me, feel free (you may already know way more than me, in which case, ignore me).
The single best way I've seen to think about it is "what office work could you do easier if you had access to an army of very helpful, enthusiastic stupid people"? And that's not nothing, but all this "oh it's going to solve global warming stuff" is crazy-making. Is Philomena Cunk going to bring us Utopia? Here, for an example, is someone who had the brilliant idea to have ChatGPT (v3) try to play chess against Stockfish, the best open-source chess engine. It's amazing.
The single best way I've seen to think about it is "what office work could you do easier if you had access to an army of very helpful, enthusiastic stupid people"? And that's not nothing, but all this "oh it's going to solve global warming stuff" is crazy-making.
That is a good way to look at ChatGPT, but not a good way to look at the topic of AI in general, of which ChatGPT is only one part. ChatGPT couldn't solve the protein-folding problem either, but DeepMind built an AI that could, three years ago. It isn't prima facie ludicrous to think that future advances in AI could lead to, idk, improved models of the way plasma behaves in a fusion reactor, or interesting new materials for batteries and solar cells. Maybe it won't, but it seems to be at least plausible that it could.
The "haha AI is useless because ChatGPT can't play chess" example is particularly odd. What's it supposed to show about machine intelligence that ChatGPT cannot beat a chess-playing computer program?
idk, improved models of the way plasma behaves in a fusion reactor
I assume you mention this because you know it's a thing?
That's fair! I just don't think we're going to get to strong AGI, which Drum is asserting we will. (And I definitely don't think most ML applications are on the path to AGI; the protein-folding example or, e.g., face recognition or automated StarCraft play are ML examples that don't seem to map onto the claims people are making about super-human level general intelligence, any more than Boston Dynamics getting their robots to be able to do a backflip is.) ChatGPT and similar LLMs are what I think is getting people to whip out their HAL/Wintermute/C3PO notions about the forthcoming world of Smart Machines: AGI will be able to do pretty much anything humans can do, and shortly after that it will be able to do more than humans can do.
"what office work could you do easier if you had access to an army of very helpful, enthusiastic stupid people"?
Phone sex hotline.
31: I mean, I can see this one? Deepfake audio + voice recognition is close to good enough, and it's not like the customers are going to notice when the large language model gets the math wrong about how much it owed that strapping young pizza delivery guy.
The problem isn't the technology, the problem is I'm not sure if the 1-900 billing infrastructure still exists and is comprehensible to people.
It's not that it couldn't beat a chess playing AI, it's the way it played.
Oh, yeah. Ajay, if you didn't click the link to watch the whole game, you should.
I assume you mention this because you know it's a thing?
I didn't know it was a thing that Deepmind was working on, I just knew it was a really difficult problem in the development of fusion reactors. But that's great news!
I tried to watch the full match but the video doesn't seem to be playing more than the first five seconds.
It makes a series of illegal and increasingly bizarre moves (rook moves diagonally, queen hops over a row of pawns, captures one of its own pieces, etc.)
It just stopped for me too. I'll try a different browser when I have a chance.
It makes a series of illegal and increasingly bizarre moves
It's a disruptor!
so one of the worst nightmares of managerial work is an army of enthusiastic workers who don't understand what they are doing. this is an absolute horror show. "work" is being done, to no effect or actually making things worse, and yet why aren't you the manager showing results??? "work" is being done!
although i'll hand it to k drum, he reminds me to be grateful for my father not being an obtuse patronizing asshole about public policy as he (my father) gains on 90 years old. go dad!
25 is helpful. This is a side project so may take some time, but I'll hit you up with questions.
27: You know, it just occurred to me that this describes the most disastrous combination in Hammerstein-Equord's classification of officers: it's both diligent and stupid.
It's easier to make people less diligent than less stupid.
re: 42
Christ, yes. I'm dealing with exactly that at the moment on a complex software project, which has the lovely combination of being: 1) late, 2) underfunded, and 3) led on one component by a diligent, hardworking, really nice, but just not conceptually very smart developer. It's a total nightmare. It would absolutely be quicker to do everything myself.
That said, I have occasionally used GitHub Copilot to bang out boilerplate code for things, and it has been moderately useful. It's sometimes wrong, but it saves on sheer typing (I can read the code, so I know if it's generated crap).
Geez, tough audience. I don't have a good intuition about whether GPT is a significant step towards AGI, but I would say three things with confidence:
1) ChatGPT is impressive, as is.
2) Something that could consistently generate correct responses (or "I don't know") to natural language questions would provide a lot of value. ChatGPT isn't close to being consistent, but we'll see how quickly it improves.
3) The dumb riddles are funny, and what they demonstrate is the difficulty of natural language processing.
Put a ChatGPT interface on top of the specialized AIs (fusion reactor design, chess, protein folding)! Checkmate libs!*
*Included because LLMs tell me that's the most common thing to say after making a contrary point.
Supposedly GPT4 can get a 4 on AP Calculus BC, so I wouldn't bank on it not being able to do basic math much longer.
I always had a theory about Calculus grades, which is that they're typically bimodal because there are two types of students: a group of students who understand the material and only miss things when they make some calculation error (giving a Poisson distribution that's centered near 100) and a group of students who don't understand what's going on and randomly try things they learned about (giving a normal distribution centered around a B-). But there's some overlap here, a student with a high B+ or low A- could be in the first group and unusually error-prone or in the second group and unusually competent.
Chat-GPT is the perfect student in the second group.
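That two-population theory is easy to play with. A sketch with made-up parameters (three points per slip, Poisson-many slips for the students who understand, a normal around 80 for the students who don't):

```python
import math
import random
import statistics

random.seed(0)  # deterministic for the sake of the example

def poisson(lam):
    """Knuth's algorithm; fine for small lambda."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_grades(n=1000, p_understands=0.5):
    """Two-population grade model: students who understand lose ~3 points
    per careless slip; students who don't land near a B-minus."""
    grades = []
    for _ in range(n):
        if random.random() < p_understands:
            g = 100 - 3 * poisson(1.5)
        else:
            g = random.gauss(80, 6)
        grades.append(max(0.0, min(100.0, g)))
    return grades
```

With those invented parameters the histogram comes out bimodal, and the high-B+/low-A- overlap region the theory predicts is right there in the middle.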
Brad DeLong's description of this ChatGPT output as falling into the uncanny valley is spot on.
https://braddelong.substack.com/p/testing-out-internet-search-enabled
27 is so funny. Just killed me.
(For those of you who aren't chess players, it starts out extremely normally, then ChatGPT makes an illegal move, then Stockfish starts clobbering it, and when it realizes it's losing it just starts cheating like crazy.)
Anyway, I wasn't into ChatGPT and then I realized you could make it write Psalms about any topic you want, and now I love it.
it's best at replacing junior coders.
Orthogonally, what consequences flow from the deepened incumbency advantage/bias for languages? (Assuming the usefulness of LLMs, and hence productivity of coders using them, scales with the corpus of existing code.)
42. Understanding management's goals requires honest, competent, and reasonably transparent management. In many contexts, asking why something is done is basically a bad idea. Often the answer to why is something like 49, except the person at issue is in charge of something and maybe is lacking niceness or honesty, and everyone involved strongly wishes to avoid saying so. ChatGPT or similar won't help.
29 and similar are IMO exciting-- there are a bunch of physics problems with rich training data where either optimization in a big space now done with heuristics or a similar paralysis of many choices blocks first-principles progress. Electronic structure of solids and large molecules and a bunch of other physics problems are getting a slew of strong predictions if not explanations from these approaches. Roald Hoffmann and JP Malrieu, both accomplished quantum chemists, wrote a truly fantastic set of essays about this a few years ago: https://onlinelibrary.wiley.com/doi/full/10.1002/anie.201902527
Did ogged have ChatGPT write the post?
They're notoriously bad at math, for example. If you ask an arithmetic problem it will probably give a wrong answer.
I think they added some modules for this, since it was worse in that regard than Wolfram Alpha was years ago. Previously, it got the "bat and ball that together cost $1.10" problem wrong in the same way people usually would. Now if you ask it the same question, it carefully sets up the algebra and solves it, like a passing student showing their work.
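For reference, the algebra it now walks through: bat = ball + 1.00 and bat + ball = 1.10, so 2 * ball = 0.10 and the ball is five cents, not the intuitive ten. As a snippet:

```python
# bat + ball = 1.10 and bat = ball + 1.00
# => ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
assert abs(ball - 0.05) < 1e-9       # five cents, not ten
assert abs(bat + ball - 1.10) < 1e-9
```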
A good example where it still falls short: if you give it the question "which falls faster, a pound of iron or two pounds of feathers?" it answers as if the items were one pound each.
I'm not confident the "certainty no matter the facts, even if facts need to be made up" aspect is one that can't be edited out, eventually. But it's more disappointing that people are already "checking facts" with ChatGPT with the apparent belief its answer will have some external validity to be relied on.
In terms of theory of mind, I wonder if AGI is even a thing - maybe if we get human-equivalent or -superior intelligence in fifty or a hundred years it will turn out to be the result of combining all the bags of tricks we've accumulated over the years, rather than some hypothetical "pure thought" function.
Wow: I just tried the "two pounds of feathers" problem again and it went even more in the wise-fool direction, starting: "Both the pound of iron and the two pounds of feathers would fall at the same rate if there were no air resistance..."
The chatbot doesn't need air. AI might kill the Earth to justify its math.
62 was basically the first draft but the studio didn't get it, so we had to do the baby-battery bullshit.
I'd be more supportive of your original vision if the sequels made any sense.
Oh Lord, I come to You in prayer
As I lay here, dying in despair
My life slipping away like a lost game
As I played Dungeons and Dragons, in vain.
My comrades fought beside me with might
As we battled the demons of the night
But alas, fate was not on our side
And in the end, we fell to the tide.
Yet even in this moment of strife
I feel Your presence, bringing new life
For You are the One who guides my way
And in Your mercy, I find peace today.
Though I may leave this earthly plane
And never roll the dice again
I know that in Your kingdom above
I'll join the greatest party of love.
So I surrender my life to You
As I pass through death's dark hue
And in Your embrace, I find release
Knowing that in You, I'll find eternal peace.
66 is great and reminds me that this story of playing a D&D session with ChatGPT is good: https://reprog.wordpress.com/2023/02/12/i-played-dungeons-and-dragons-with-chatgpt/
The chatbot doesn't need air.
All that is solid melts into ai
31,33: it's been done. Ashley Madison used to do this to keep their customers interested.
(not only done, but outed, busted, bankrupt, and now apparently forgotten)
The point is to do it well enough that you either aren't outed or that you can be open about it and still get customers.
The chess thing is weird. If I try to make an illegal move in online chess, it won't let me (it being some other computer program, I suppose).
You should try playing against small children.
73: I suppose I should clarify -- I'm not ineptly trying to cheat. If I inadvertently try to make an illegal move in an online game, the program won't let me.
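Right -- the program rejecting your move is just doing bookkeeping, which is exactly the part a next-word predictor doesn't do. A toy version of that check for a single rook (a hypothetical helper with a made-up board representation; a real server does this for every piece, plus checks, pins, castling, and so on):

```python
def rook_move_legal(board, src, dst):
    """board: dict mapping occupied (file, rank) squares to "own" or
    "enemy"; src/dst are 0-indexed (file, rank) squares. A rook move is
    legal iff it stays on one rank or file, every square strictly
    between src and dst is empty, and dst isn't one of your own pieces."""
    (sf, sr), (df, dr) = src, dst
    if src == dst or (sf != df and sr != dr):
        return False  # rooks move along exactly one rank or file
    step = ((df > sf) - (df < sf), (dr > sr) - (dr < sr))
    f, r = sf + step[0], sr + step[1]
    while (f, r) != dst:
        if (f, r) in board:
            return False  # a piece blocks the path
        f, r = f + step[0], r + step[1]
    return board.get(dst) != "own"
```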
hey, I want a good reporter to go back to the Ashley Madison thing and dig into this question. Everyone was expecting with bated breath that there would be a sensational wave of divorces and lawsuits and scandal but I have the impression...that didn't happen? What mostly happened was that the company itself was outed as a fraud, there were some interesting parallel stories, but the advertised side-chick Chernobyl...not so much.
71: The website still seems to exist. I don't want to navigate to it at work, but I found some recent PR-planted "news" about people using it on Yahoo News and the like.
As an AI language model, I do not have access to real-time information or updates, but as of my knowledge cutoff date of September 2021, Ashley Madison was still operating as a website.
Ashley Madison is a dating website that markets itself to individuals who are married or in committed relationships and seeking to have affairs. In 2015, the website experienced a major data breach that exposed the personal information of millions of users, causing significant controversy and scrutiny.
Since then, the website has implemented improved security measures and made efforts to rebuild its reputation. However, it is important to note that the website and its practices remain controversial and may not be ethical or advisable for those seeking genuine, healthy relationships.
It's worth pointing out that 2015 is just before Republicans decided adultery was fine.
In terms of theory of mind, I wonder if AGI is even a thing - maybe if we get human-equivalent or -superior intelligence in fifty or a hundred years it will turn out to be the result of combining all the bags of tricks we've accumulated over the years, rather than some hypothetical "pure thought" function.
Yeah, what it reminds me of is how back when artificial speech synthesis first became possible like 50 years ago some people thought it might be possible to synthesize speech that was even better than people produce naturally in terms of perceptibility or whatever (no one was really clear on what this would actually sound like so the predictions were pretty vague). Turns out no! Pure formant synthesis sounds super robotic and unnatural. It's the little quirks in the sound due to the shape of the vocal tract etc. that make speech sound recognizably "human" and the computers struggle to replicate that.
Maybe they all had to explain something to their spouses and turned out to be more convincing than you might expect.
76: This is from a law firm writing just a few months after the breach, not any journalistic outlet, but they note less surge in divorces than expected and posit three causes:
1. Data not user-friendly
2. So much gender imbalance on the site that most male users would not have actually gotten affairs out of it
3. No-fault divorce means finding a spouse there didn't increase your legal chances at divorce, just inclination (and they don't say but: the details wouldn't have to appear in court documents, so less juicy for media)
Nowadays most speech synthesis just amasses vast libraries of clips of real people speaking and strings them together. It's called "concatenative" as opposed to formant synthesis. It sounds pretty natural!
The example of 83 that people are most likely to encounter in daily life is customer-service phone tree systems. Still a slight uncanny-valley feel but they've gotten quite good.
Concatenative synthesis is much less interesting to linguists than formant synthesis and requires vastly more storage space and processing power. Turns out none of that matters!
The chess thing in 27 was truly wonderful to watch unfold. I'm so glad I watched it three times slowly picking up on what was happening before the explainers/spoilers here.
Also, it was THIS close to winning. There was some real end-game drama.
I found a CDC table of divorce rates in every state from 1990 to 2021. There may have actually been a small bump in 2016, the year after the breach! But then there was a sharp decline over subsequent years.
National rate of divorces per 1,000 population: dropped from 8.2 in 2000 to 6.8 in 2009; held at 6.8 through 2013; 6.9 in both 2014 and 2015; 7.0 in 2016; 6.9 in 2017, 6.5 in 2018, 6.1 in 2019, 5.1 in 2020; 6.0 in 2021. So clearly some suppression in 2020 followed by rebound in 2021, but something longer-term was happening pre-pandemic.
Looking at it alternatively by median rate of change across the 51 states, the median state dropped 1% in 2012 and 2013; increased 2% in 2014; decreased 2% in 2015; increased 0.3% in 2016; then -4%, -5%, -11%, +11%.
Why does the CDC do divorce statistics?
It was either them or the Department of Marriage and Families
90: The Centers for Disease Control and Prevention (CDC) collects and publishes divorce statistics as part of their efforts to monitor and track population health trends. Divorce is considered a public health issue because it can have significant impacts on individuals and families, including emotional, psychological, and financial stress. By collecting data on divorce rates and trends, the CDC can identify patterns and risk factors associated with divorce, as well as develop and evaluate interventions to promote healthy relationships and prevent divorce.
Additionally, divorce statistics can provide important information to policymakers, researchers, and other stakeholders who are interested in understanding the social and economic impacts of divorce on communities and society as a whole. This information can be used to inform policies and programs aimed at supporting families and reducing divorce rates.
Well this is a little scary: I am responsible for fixing a poor piece of writing from a subordinate, and I just saved myself a significant amount of time and effort by asking ChatGPT to do the job. The ChatGPT version was still rough, but it solved some of the key problems with the piece, and was much easier to fix than the original piece. (Hitting the "regenerate response" button, however, made the output worse.)
It sounded totally realistic when George Bush sang Sunday Bloody Sunday.
I thought ChatGPT was playing chess the way a person might play if they were playing blind (without being able to see the board) -- if you're really good you can play ok without a board, but a novice will quickly lose track of their pieces and start making illegal moves. But that's probably not a good analogy. ChatGPT is playing chess by the rules of conversation -- so it comes up with moves that seem like moves that other people make.
Where's the thinkpiece connecting divorce rates dropping with incel/MRA culture via fewer marriages likely to lead to divorce?
86: I love that the Elo rating for chatGPT in that video is 9999.
I wonder how similarly ChatGPT plays to someone who like spent an hour or two watching other people play chess but has never played and isn't totally sure on the rules. Probably pretty similar?
I kind of love ChatGPTs approach to chess. It reminds me of trying to play with a four-year-old that gets distracted and says 'here comes the unicorn!'
I kind of love ChatGPTs approach to chess. It reminds me of trying to play with a four-year-old that gets distracted and says 'here comes the unicorn!'
99: It's more fantastical, because it doesn't have any of the natural limits of a person playing with pieces and a board. A person like that might move the knight illegally, but they wouldn't think to take a queen from off the board and just put it anywhere.
101, 102: 102 is wrong for a 4-year old.
ChatGPT certainly is fun to play with, but I wonder if proofing its output when it's for something real will get old in the way that constantly supervising the Tesla autopilot gets old.
Admit to doing gotchas on it as well. It's not _great_ at material constitution type questions, as in, it has no idea. For example, it will cheerily tell you that Paris is _in_ France and therefore there's no distance between them, but then it insists that Reunion is remote from France by this many miles. And that your foot both is and isn't part of your body.
And then you can ask it about the lump of clay becoming a statue, and yes, it will agree that perhaps there are now two things, not one: the lump _and_ the statue. And then you tell it that the lump weighs 20 kg and the statue also weighs 20 kg. And then it cheerily tells you that yes, what you've got makes 40 kg altogether, and by the way, the lump and the statue are only two different things in some senses, did you realise?
I suppose I'd feel more positive about ChatGPT if it wasn't so clearly destined for customer support, as in, it starts tomorrow with its happy little synthesised voice.
Just saw another bad mathematics example. Someone asked if a flat rate tax offset by a flat dollar amount rebate would reduce inequality and it said no because flat means everyone is treated equally.
Also, from Cheryl Rofer, this: "What I see so far from their release is confusion of a statistical word generator with human thought and the potential to clog the internet with words that may or may not constitute true statements and no way to tell."
I started out much more optimistic on the internet: Wired subscriber, Usenet (admittedly late), blogs and the rest. And then we got Facebook, MAGA, weird right wing tribalistic news sites, and if not full societal atomisation, then at least a good degree of it. And really, really shitty and ugly advertising everywhere. So maybe ... just maybe ... scepticism is the way to go from here on out. The potential for horribleness is real.
Some of the internet is OK. Wikipedia is actually pretty decent. Like Cheryl, I'd prefer not to let ChatGPT on there just yet, tbh.
107.2 describes my general trajectory as well.
I likewise identify with 107.2.
If you ask ChatGPT, it is rather skeptical of the ChatGPT model being able to evolve into general intelligence, but it doesn't rule it out.
The Aperceptive link in 12 was super-helpful to me in explaining the skepticism about developing general intelligence from a Large Language Model.
Wikipedia is amazing, but yeah there's fewer and fewer good things and they're getting harder and harder to find. At least there's still celebheights.com
"For example, it will cheerily tell you that Paris is _in_ France and therefore there's no distance between them, but then it insists that Reunion is remote from France by this many miles. And that your foot both is and isn't part of your body."
Both of these sound like perfectly good answers that a human might give. Reunion is part of the French Republic but a human would understand that when we say "France" conversationally we mean the territory in Europe and Reunion is thousands of miles from that.
And your foot is part of your body as in your physical form, but there's also a sense in which we use "body" to mean just the torso and your foot isn't part of that.
The number of Americans who know that Reunion is part of France would not be distinguishable from zero if you surveyed the general public.
Nor is the number who have even heard of Reunion, honestly.
Reunion is the place to be for dodo hunting.
This might be relevant: https://jabberwocking.com/marketing-in-the-era-of-gpt-4-is-a-doomed-occupation/
KDrum was a marketing VP at a tech startup, back in the day. It's possible that his attitude towards tech is ... biased by his career experience. Maybe? I know that my own experience (most of my career spent debugging other people's shit) has influenced the way I see new tech: always looking for the technical bugs. And my lack of experience with people and human systems makes me prone to not see the social and societal impacts of new tech the way that people with that experience can.
It's just a thought: maybe he's upselling AI (and self-driving) b/c that's what he's naturally inclined to do.
111: 'How far is Alaska from America?' I'll be back shortly.
"Alaska is actually a part of the United States, so it is not located "away" from America. However, if you are asking about the distance between Alaska and the continental United States..."
OK, have teased GPT-4 a bit more; it gave me this:
"Your foot is directly connected to your body and is not a separate entity that can be measured in terms of distance from your body. It is attached to your lower limb, which in turn is connected to your torso via your hip joint. The exact distance between your foot and the rest of your body will depend on factors such as your height, leg length, and the position of your foot relative to your body."
It is not tracking its own usage (sense of terms) across adjacent sentences.
"Your foot is directly connected to your body and is not a separate entity that can be measured in terms of distance from your body. It is attached to your lower limb, which for now is connected to your torso via your hip joint. After the accident with the mower, the exact distance between your foot and the rest of your body will depend on factors such as the horsepower of the engine of the mower and how brave the neighborhood cats are."
Great, we get a computer outputting "The leg bone's connected to the knee bone..." and everyone is losing their shit like it's the second coming.
It is not tracking its own usage (sense of terms) across adjacent sentences.
Interesting -- contrast with the post in 67.
||
My eight year old just asked me what an ashtray is.
I can see why he didn't know, but it still is amusing me.
|>
When I was 8, in school we made ashtrays to give to our parents.
If not for the heroic efforts of OpenAI to limit the awesome power of their text generators, we will all be sacrificed at the altar of an ashtray-producing intelligence, just trying to impress its parents and make them proud.
They wouldn't give us cigarettes to test with, so I think the gaps in the side that were supposed to hold the cigarettes were always the wrong size on the ashtrays I made.
112- According to Nate Silver that means there's a reasonable likelihood that Reunion is not French.
I overheard a non-native English speaker at work recommending using AI for emails to clients. "I used to use ChatGPT, but now I use Bing. It's much better, much more polite!" So yeah, I guess that's a use case I hadn't really considered.
Second, a friend just announced that her husband, a lawyer, won an award for incorporating AI into writing routine legal documents. So lawyers at their firm are freed from drudgery to do the challenging and meaningful legal work they love . . . they handle contracts, with a specialty in professional sports . . .
I had a workmate from Reunion one time. It took a while for me to figure out he was from Reunion because he always identified himself as French.
Ogged,
It's late in the thread so you probably won't see it, but ... could you post the question and answer? Just so we can judge whether it was actually hard or not? [and the quality of the answer?]
It's like when you meet someone who tells you they are Greek and it takes you a while to figure out they are from Cyprus.
Speaking of confusion, I just learned that there's a thing that exists and is called a "raccoon dog." I thought that was something you only saw if you were an earthbender or something.
They're just Tanukis, they're in Mario 3.
I missed most of Mario. I played Donkey Kong in arcades, but didn't have a Nintendo system until the Wii. Which was great, but not as Mario-centric as I think the earlier ones were.
Well if they can fly by spinning their tail that would definitely make it easier for them to spread COVID.
They can't fly but apparently they can spread covid.
127.2 Back around the time I started commenting here, and well before probably, I would joke that the Maryland Court of Special* Appeals had a couple of macros: control P for error assigned was not preserved below, control H for harmless error. These bots could very well be programmed to do this exact thing.
* What is special about appeals to that court is that they are completely ordinary, and not special in any way.
This thread is amazing, this blog is amazing.
I was JUST wanting to read about and talk about my doubts about AI. And here comes this blog.
The story about going through the spider cave is like a metaphor for human existence.
I swear I am not high. (Very glad not to be, as the spider cave story would be more unpleasant to read if I were.)
I have to admit that this is an amusing use of chatGPT in an article.
||
that deep well surrounded by huge ugly buildings and sooty factories, spewing rust from their chimneys and roofs and walls, staining the water with sulphur-yellow liquid, a filthy sewer filled with empty cans and rubbish and horse carcases, dead dogs and gulls and wild boars and thousands of cats, stinking ... A viscous, turbid mass, teeming with maggots.
|>
I guess that should have gone in the other thread.
143: why didn't you tell me you were visiting Columbus?
||
Can anyone in California comment on how expensive San Diego is compared to the Greater Boston area? I know it's less than SF but not sure how it compares to LA and Boston. A company contacted Tim about applying for a job there.
I suppose I would rent out my house rather than try to sell it right now and then rent out there if we considered a move.
|>
145: I was going to, but it seems the tourism board is leaning on LLM for its copy.
I see the Unfoggetariat decided to go with "this basketball can't even comb my hair." That's fine. I think right now it's more relevant to programmers, both because it's in their field, and code is so structured that it has an easier time doing things that make sense.
could you post the question and answer?
There are probably too many identifying details for what's an open source project, but basically: given two tables with a many-to-many relationship, what's a good way to set them up so that we can query what the relationships were for any given record at any given point in time?
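For the curious: one standard answer to that kind of question (a sketch only, with made-up table names, and not necessarily what the project actually did) is to put validity timestamps on the junction table itself, so old links are ended rather than deleted. A "point in time" query then just filters on those timestamps:

```python
import sqlite3

# Hypothetical example: authors <-> books, with the link rows carrying
# valid_from / valid_to. A NULL valid_to means the link is still current.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE author_book (
        author_id  INTEGER NOT NULL REFERENCES authors(id),
        book_id    INTEGER NOT NULL REFERENCES books(id),
        valid_from TEXT NOT NULL,   -- ISO-8601 date; sorts correctly as text
        valid_to   TEXT             -- NULL = link still in effect
    );
""")
conn.execute("INSERT INTO authors VALUES (1, 'A. Writer')")
conn.execute("INSERT INTO books VALUES (1, 'Some Book')")
# A link that existed from 2020 until it was severed in 2022:
conn.execute("INSERT INTO author_book VALUES (1, 1, '2020-01-01', '2022-01-01')")

def links_as_of(when):
    """Return (author_id, book_id) pairs that were live at date `when`."""
    return conn.execute(
        "SELECT author_id, book_id FROM author_book "
        "WHERE valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)",
        (when, when),
    ).fetchall()

print(links_as_of("2021-06-01"))  # [(1, 1)] -- the link was live then
print(links_as_of("2023-06-01"))  # []       -- the link had ended
```

The nice property is that the "current state" query is just the historical query evaluated at now, so you never maintain two copies of the relationship.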
Ogged, have you been reading Apperceptive?
That's very smart. I had an argument with someone a few years back about "brains in vats," where I was telling him that a brain in a vat would quickly go insane; I think some people really think brains are microchips. But I would like to hear the robust case for things like ChatGPT being "dangerous". I see that word a lot and I'd like to have it spelled out. Is it just that it makes stuff up and people don't realize that? Or something more?
You can't let the brain figure out it is in a vat is all.
148.last
Here you go (I honestly thought this guy was doing a bit until I peeped his bio):
https://twitter.com/michalkosinski/status/1636683810631974912?s=46&t=nbIfRG4OrIZbaPkDOwkgxQ
146: I'm in California, but I don't have special knowledge of San Diego. I know it's a lot cheaper than San Francisco.
I looked at Zumper, which tries to average listings by # bedrooms by city, and found the following for 1-bedrooms:
SF: $3,000
Boston: $2,700 (fell recently from more like $3,000)
LA: $2,400
San Diego: $2,350
San Diego is close to the starting point of the Pacific Crest Trail, so that's a plus.
I spent most of the week in San Diego, arriving home late last night. It rained so much I was thinking about building an ark: it's not usually like that. My sense from my daughter's various efforts at house hunting there is that it's cheaper than Boston or LA, but still quite expensive.
The house next to me is going on sale soon. I don't know what the price is, but I expect it will be under $300k.
148: I think it's the Rofer thing of the textual world getting slowly filled with plausible-sounding fluff to the point of unusability. Now maybe people will develop tools and skills to help them navigate that world - back to the books! - but that is hardly an unalloyed benefit of all humankind scenario.
I see promise for RPGs though!
It's been a big year for shaped charge penetrators.
She used to live in OB, and then Point Loma, but is now in Chula Vista. The housing economies of those various places are as different as various communities in greater Boston. What strikes me as really different from greater Boston is how different the climate is from one SD area community to another.
I think it's the Rofer thing of the textual world getting slowly filled with plausible-sounding fluff to the point of unusability. Now maybe people will develop tools and skills to help them navigate that world - back to the books! - but that is hardly an unalloyed benefit of all humankind scenario.
Yeah, and we're seeing increasingly worrying signs of people taking the output of these things seriously and not double-checking it. It's kind of painfully on-brand for people like Tyler Cowen who are looking for engaging counterintuitive hot-takes to do this. And then once this stuff is out there it just becomes that much harder to find actual information about anything.
When my aunt and uncle sold their house in San Diego, they got a bunch of money. Apparently, they lived on the right kind of rock to live on if you wanted to minimize your risk in an earthquake.
159: more Doctorovian enshittification.
I'm thinking the internet getting too shitty to use is probably the best we can hope for.
I don't know what the price is, but I expect will be under $300k
Average house price around here hit $298K last year. A lot of those are going to remote workers because it's hard to afford that much on a local salary.
Meanwhile, the rental vacancy rate is 0.6%.
You miss 100% of the apartments you don't rent.
In better news, the ADU ordinance I asked for is finally making progress through the pipeline. At this point it looks like it has everything I want except it does require each ADU to have a parking space.
166: good for you, even if the parking space is frustrating.
166: Nice.
California made ADU parking requirements conditional on proximity to a bus stop. Could you do something like that?
168: I doubt they have enough bus service for that to make much difference.
The house by me has two parking places and is three blocks from a bus stop served by the most frequent bus in the area.
We did a broad expansion of ADUs a little while ago. It's a good policy to make a little dent in the housing problem without too huge a political lift.
155: imagine, an Internet with a lot of spam and bullshit. This is something I really don't see; there is already vastly more crap on the Internet than you could ever possibly read, the cost of generating more is pretty much zero, so an expensive way of generating more seems, idk, crap?
173: sub "scam" for "spam" and that applies perfectly to crypto, too.
I downloaded Woebot and have tried it a couple of times during moments of anxiety and sadness, and HOLY SHIT is it ever not helpful. It is honestly so hilariously terrible that I would laugh, if I could.
Provincial happy political news: Michigan added LGBT folks to existing protected classes and just repealed right to work. Yay, nonpartisan voting maps!
I've had trouble finding a therapist and I read about Woebot and thought, well, it can't be worse than nothing. Wrong! It is worse.
176: That's great. I'm surprised Michigan ever had right to work.
174: scam or spam? Take a bite and find out!
The article on raccoon dogs in the NYT answers the question we've all been afraid to ask - "Can I pet a raccoon dog if I see one?"
179: Republican legislature and lots of people were jealous of the "lazy" autoworkers with their fancy breaks and high pay. Plus, I guess if you want to attempt to curtail union power, MI is a good place to put your efforts.
181: So, can we?
183, 184: I can tell you really want to.
Sadly, the NYT says it's not a good idea.
The killjoys on the NYT say "no". And the RSPCA says you shouldn't keep one as a pet. They probably aren't that enthused about the Chinese keeping them in fur farms either, but were too polite to say.
They are monogamous. I wonder if couples eat together every day.
They might be lying about the monogamous thing.
The raccoon dogs or the NYT. I don't trust either.
Reading 166 is an interesting puzzle for those of us who are vague on the difference between ordinance and ordnance. But now that I'm past that, good work!
In honor of 176, I'm watching the Simpsons episode where Homer runs the union.
"Uh Yeah" is going to be my go-to answer for "Did you find the bathroom?"
142, sorta: One of the things in the article from The Nation (which I presume is ok because it doesn't involve Russia) is that in the lawsuit between publishers and Open Library, publishers are claiming that ebooks are fundamentally different products from books.
I wonder, do publishers make that same claim when negotiating contracts with authors?
Surely Open Library has pursued this line of questioning in their defense? Or is it the kind of thing that is too askew for lawyers engaged in researching the details of their own case?
@177 Tell me what is wrong with woebot. I can pass that info along to people there who would likely appreciate customer feedback.
You can tell them I appreciate the name.
What about a Corsican catfox? https://www.geo.tv/latest/476812-corsican-cat-fox-confirmed-as-unique-species
Thanks but I just ordered a pizza.