It feels a bit like the early days of the web felt. This is accurate. I don't know where this goes, but it'll be really, really interesting. I have a friend who's been "interviewing" ChatGPT like it's a candidate for a job at his (tech) company. And it's done really, really well. To which my response has been "that simply shows that your interview method isn't very good". But maybe I'm wrong.
I was talking about this earlier, and what it produces is precisely bullshit by Harry Frankfurt's definition -- utterances produced without any regard, one way or the other, for their truth or relationship to reality. It's a funny thing to have automated.
Some of the results are terrific, though -- you should try getting it to write sea shanties about your professional life. Those have been turning out great.
Nothing will top translation party.
And it can write (some kinds of) code for you! Which is odd, given its difficulties with some basic word problems.
I instructed it to write a Ronald Reagan speech against car dependence with a folksy analogy. It read more like a student essay, but it did analogize cars to cake (in that they're good, but too much is bad for you).
Incredibly, translation party is still up but if it's re-running the prompts, the translations have improved so much that this thread may not be as funny as it was in 2009.
Presumably someone has asked the chatbot to create poems in the English language modeled after real poems that end in "Fuck you, clown!"?
I really would hate to see the guy who writes the sea shanties for our office lose his job due to automation.
This guy's tweet--that GPT3 is like having an infinite number of really dumb employees--was funny, and then I got to thinking about what my computer actually does: https://twitter.com/ZackKorman/status/1599317547509108736
Yeesh. Anyone with a basic understanding of molecular biology -- each amino acid is encoded by three nucleotides -- would do better at answering this. It seems great at giving supremely confident and totally incorrect answers. (Was it trained on tweets and online comment sections? Probably not, because it doesn't answer half the queries with Nazi propaganda.)
It seems like a combination of a good NLP algorithm, Google calculator, and Wikipedia.
Probably not because it doesn't answer half the queries with Nazi propaganda.
That's probably the biggest function of its appropriateness filter, which people have also found ways around.
A work colleague has been setting it JS coding problems--asking it to solve some of the basic things we actually do at work, not setting it the sort of puzzles people get set in interviews--and the output is pretty decent.
I was talking about this earlier, and what it produces is precisely bullshit by Harry Frankfurt's definition -- utterances produced without any regard, one way or the other, for their truth or relationship to reality. It's a funny thing to have automated.
It's interesting though to consider how much of human utterance is bullshit in this sense. All fiction is bullshit, for a start. All Carey ritual communication is bullshit.
Right, this is why it works, and why chatbots all the way back to ELIZA work - there's a lot of predictable, repeating structure in language itself, which makes it possible to guess plausible text. (Similarly, if you're learning a new language, one of the first things you'll be taught are help strategies, things you need to respond validly to other people and explain that you didn't understand what they said or don't know the meaning of some word.)
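A toy sketch of that point (purely illustrative -- this is not how ChatGPT works, just the oldest trick for exploiting repeated structure): a bigram chain over a tiny made-up corpus already produces locally plausible text, because it only ever emits word pairs it has actually seen.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; any text with repeating structure would do.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat lay on the rug .").split()

# Record which words have been observed following each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start="the", n=8, seed=1):
    """Guess plausible text: repeatedly pick a word seen after the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

print(babble())
```

Every adjacent pair in the output is a pair that occurred in the corpus, so the result reads as (dull but) grammatical English -- with no regard whatsoever for truth, which is the Frankfurt point again.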
I would say fiction isn't bullshit -- that is, the one sentence I gave pointing to Frankfurt's definition might fit fiction, but to the extent it does that's bad drafting on my part. In a fictional context the concept of bullshit doesn't squarely apply. Lots of the texts produced by AI are purporting to be non-fiction, and to that extent are bullshit; the ones that purport to be fiction seem to me to have a relationship to human-generated fiction analogous to bullshit.
I love the Bullshit reference; spot on for people who are familiar with it.
And this from 14 seems like the key:
there's a lot of predictable, repeating structure in language itself, which makes it possible to guess plausible text
ChatGPT is just astoundingly good at finding and reproducing those patterns. Of course you can find where it breaks down, but even at this relatively early point in the development of these tools, a lot of the code it produces is helpful, its translations are about as good as the other online systems, and it's often just delightful.
There's a book about reduced contexts for speech that I liked a lot, Forms of Talk by Goffman. He considers radio talk (superficially open and spontaneous; isn't) at length. That and, say, QVC seem like places where this might work -- endless praise of cubic zirconia or kitchen gadgets. Punditry also, maybe: could an automated Michael Tracey defeat 1,000 automated 5-year-olds?
But could an automated Maxine Waters beat 1,000 Michael Traceys?
Could 1000 automated Michael Tracies defeat an automated Maxine Waters?
All Carey ritual communication is bullshit.
I know you're not talking about my Mariah.
7: I did that this morning (using asterisks to avoid the filter) but it treated the "Fuck you, clown" as diegetic, adding after it one final verse about the jarring intrusion.
21: I'm pretty sure that all she wants for Christmas isn't me.
I asked it my favorite brain-teaser, about someone arriving at a train platform at a random time but 80% of the time boarding one of two equally-frequent lines. It got it conceptually right but flubbed the illustration:
This situation is possible because the trains on line A and line B may have different schedules, such that one line is more likely to arrive at the station platform first. For example, if line A has a train that arrives at the platform at 7:35, 7:45, 7:55, and 8:05, while line B has a train that arrives at 7:40, 7:50, and 8:00, then line A is more likely to arrive first when the man arrives at the platform at a random time between 7:30 and 8:00. Over time, the man will tend to board line A more often because it is more likely to arrive at the platform first.
It would have been completely correct if the example had been line A coming at 7:38, 7:48, etc.
Good thread https://twitter.com/studentactivism/status/1599753552401813504?s=46&t=lzdbu8uahzaQ1N0HkDb7sw
I just asked it to come up with an SF plot synopsis with at least two twists. It was pretty by-the-numbers, but the twists were real twists (even if it needed an extra push to reveal #2).
Nice post from someone using these tools to learn a (difficult) programming language. Note that "hallucination" is the term for these models making stuff up that's not connected to reality. https://simonwillison.net/2022/Dec/5/rust-chatgpt-copilot/
The existence of a viable Republican party demonstrates that we as a society have inadequate bullshit filters, and ChatGPT is a nuclear-powered firehose bullshit generator, so we're pretty fucked.
It reminds me slightly of Douglas Adams' "Reason" decision support software from "Dirk Gently".
I remember that. It was done for the C.I.A. or America in general.
Some of you are linguists right? https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language
34: Interesting! I think what all of this mostly demonstrates is just that lots of kinds of writing, even pretty complex types, are highly formulaic, and with a large enough database the AI can imitate them very well.
Yeah, I think it's finally time to hang up my doggerel hat. The AI is a way better poet/lyricist than I am.
I wonder if it can make infinite versions of "This Is Just To Say."
BUFFALO BUFFALO BUFFALO
Update: its version was horrible, and even coaching on what was wrong did not improve it - it insisted on lines being complete (if short) clauses.
Further update: I asked it to critique the difference between the original and its version, and it gave something quite sound. I asked it to rewrite incorporating that critique, and it got a lot closer, but inexplicably threw plums in near the end (which were not part of its version).