Hmmm I think Chat GPT can help with form writing, and frankly not sure that's a bad thing, as you note it can help level out class differences in bureaucratic form writing. I'm considering using Chat GPT to write my "you're failing because you missed 5 weeks of the semester" emails to students. But good writing is about communicating intelligent or creative ideas clearly and there's no way AI is able to generate creative and intelligent thoughts. It's a simulacrum of language and doesn't actually replicate how language works for humans.
For things like college admissions essays, I suppose Chat GPT will be able to write an obligatory "here's how helping the poors in Guatemala over spring break made me a better person" essay, but TBH those are probably already being written by tutors or parents anyways. The kids who write brilliant essays will continue to do so because they're creative thinkers who understand writing as an art and Chat GPT will never be able to replicate that. (I know someone who got into a top school who wrote about eating peanut butter out of the jar).
I suppose Chat GPT will be able to write an obligatory "here's how helping the poors in Guatemala over spring break made me a better person" essay, but TBH those are probably already being written by tutors or parents anyways.
I think you're missing the middle third of students who have nothing spectacular to say about their lives, just want to go to a regular big state school, and yet have to go through this dumb performance.
I do wonder how far college admission preferences have swung back in the direction of "be normal". Our kid got in early to a competitive school that requires an introductory video. You can go on YouTube and find people who posted their perfectly lit and choreographed and edited explorations of the meaning of life doing service each summer. His was walking around our house with his phone introducing each member of his family because he said that we were important in shaping who he is. I said hi while I was cutting up vegetables.
(Also had a legacy advantage so maybe the video didn't matter, but we did not wear school branded clothing in the video.)
As far as AI goes, that suggests there's still going to be a lot of weight in finding the right thing to ask it to write about.
We ruled out allowing Hawaii to go to a fancy summer camp that requires an interview.
A college should be obligated to state transparent, numeric admissions criteria & then randomly select its student body from among those students satisfying the criteria. Otherwise no Federal money (including research grants).
Agree with 2, but want to point out that's not the middle third. Median SAT is 1056 with a standard deviation of 210. Those numbers are a little weird because some states require SAT, some states ACT, etc. But my (average on most measurements) state has mandatory SATs and an average score just a touch above that average. Meanwhile our state flagships average around 1300 SAT, so a full 1 standard deviation above the mean, which translates to 84th percentile.
ChatGPT is easily a 90th percentile graduating senior, and outside of a very small number of completely unrepresentative schools, you're not going to see any essays involving "communicating intelligent or creative ideas clearly."
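The percentile arithmetic above can be checked with a quick sketch, assuming SAT scores are roughly normally distributed (a simplification; real scores are bounded and a bit lumpy):

```python
# Check the claim that ~1 SD above a 1056 mean (SD 210) lands around
# the 84th percentile, using the standard normal CDF.
from math import erf, sqrt

def normal_cdf(z):
    """Share of a standard normal distribution below z."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mean, sd = 1056, 210
z_flagship = (1300 - mean) / sd              # how many SDs above the mean 1300 is
print(round(z_flagship, 2))                  # 1.16 -- a bit over "a full 1 SD"
print(round(normal_cdf(1.0) * 100))          # 84  -- +1 SD = 84th percentile
print(round(normal_cdf(z_flagship) * 100))   # 88  -- 1300 itself is closer to 88th
```

So 1300 is actually slightly better than a full standard deviation above the mean; the 84th-percentile figure is the conservative end.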
ChatGPT has a distinctive voice (bland undergrad who hasn't done the reading). It's fantastic at speaking associate dean but it falls to pieces pretty quickly if what it's writing depends on specifics. Most course management software is going to have bot detection by next fall and I imagine college admissions won't be far behind.
(And, as usual, my question is that if this bot is legit disruptive in coding and kind of bad at writing, then why are the thinkpieces about the Death of Writers, and not what tech employment looks like when you need one junior coder to do the work of three?)
Hmmm I think Chat GPT can help with form writing, and frankly not sure that's a bad thing, as you note it can help level out class differences in bureaucratic form writing.
Yeah, this and programming seem to be the main practical uses for it so far, which seems fine to me.
And, as usual, my question is that if this bot is legit disruptive in coding and kind of bad at writing, then why are the thinkpieces about the Death of Writers, and not what tech employment looks like when you need one junior coder to do the work of three?
I was just thinking about how the fact that these things are best at programming sure puts a different spin on all that "learn to code" stuff. (I assume the actual answer to your question is that thinkpieces are written by writers, not coders.)
It is interesting that programmers so far seem to be using these things as tools and not stressing all that much about being replaced by them. I think that is probably the right attitude but it's a definite contrast to the angst in fields that are unlikely to be replaced by them any time soon.
2 & 6
If there are lots of kids churning out mediocre essays to check a box so they can go to State U, I'm not sure how using Chat GPT is a problem. If the goal is "essay" as an output, not as form of communication or a demonstration of skill, then why not save them an hour and have the bot write it? If colleges are getting a bunch of mediocre to terrible essays, they're clearly not using it as a marker of differentiation for admissions and instead going on test scores or GPA anyways. At the boutique schools, chat GPT isn't going to write you an essay that gets you in. If you get into Harvard with an AI written essay, you were going to get in anyways based on your other strengths.
I guess I feel like the schools who really take essays into account aren't going to be blown away by ones that read like "associate dean," and for everywhere else there's not a huge loss if it's AI instead of a HSer following some "how to write a college essay" template online.
Surely someone's already written a college admissions essay about using ChatGPT to write a college admissions essay. Going meta is always the next step of trying to be original.
ChatGPT being relatively good at programming is mostly about stackoverflow already existing, right? There just aren't a lot of simple programming questions that aren't already answered nicely there.
I'm skeptical now even for programming - it's given some impressive responses to some prompts, but in other cases it's apparently invented functions that didn't exist.
10: it's a huge deal for CS 101 classes, because it's impossible to police student cheating, but one really does need to learn the basics (even if full-stack developers joke their degree is in googling).
I think working programmers already understand how to use google and stackoverflow and have enough basic knowledge to make minor tweaks themselves, and so chatGPT just speeds up that process slightly and isn't a huge change.
For students who don't really understand Google or stackoverflow, it's a huge change. Especially because the chat feature lets you ask it to change its answer in various ways and so you don't need to do the minor tweaks yourself and so don't need to understand anything.
It is a little baffling to me though how much chatGPT just lies about basic math but somehow outputs code that compiles. I wonder if it's that programming uses long words for variables and functions, while math overloads them. You can't learn what "f" means in math easily.
I wonder if it's because it can check its work. With a few lines of code, maybe it actually just runs the program and verifies that it works.
I don't think the current version does that? I do know that in GPT4 they've incorporated wolfram|alpha, or something similar, which has made its math much better (which is why it can pass Calc BC now).
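The verify-by-running idea being speculated about upthread is basically a generate-and-test loop. A minimal sketch, where `generate_candidate` is a made-up stand-in for a model call, not any real API:

```python
# Hypothetical "check its work" loop: generate code, run it against known
# test cases, retry on failure. Everything here is illustrative.

def generate_candidate(attempt):
    # Stand-in for a model call; pretend the first attempt is buggy.
    if attempt == 0:
        return "def add(a, b):\n    return a - b"   # wrong
    return "def add(a, b):\n    return a + b"       # right

def passes_tests(source):
    namespace = {}
    exec(source, namespace)          # actually run the generated code
    add = namespace["add"]
    return add(2, 3) == 5 and add(-1, 1) == 0

for attempt in range(5):
    if passes_tests(generate_candidate(attempt)):
        print(f"accepted on attempt {attempt}")
        break
```

The catch, of course, is that this only works where you have executable tests; prose has no equivalent of "it compiles."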
Programmers have had tools with very good autocomplete (code completion) for many years, and in any language since the 1960s you've already had to grapple with the reality that the instructions executing on the processor are several layers of abstraction (the compiler or interpreter, the operating system, CPU microcode, etc.) down from the stuff you typed.
Also this reminds me why I hate "coding" as a word; it kind of assumes programming is a specialised form of typing.
I still don't quite understand what Star Trek 4 was intending to convey by having Scotty turn from a hunt-and-pecker to a skilled typist in about 10 seconds. Generalized hypercompetence, I guess.
Thinking about it, a lot of AI discourse is just the assumption that someone else's job is a specialised form of typing, and GPT-6 will just beat them for wpm
Thinking about it, a lot of AI discourse is just the assumption that someone else's job is a specialised form of typing, and GPT-6 will just beat them for wpm
I wanna know, has ChatGPT ever really seen the rain?
26: I have seen the rain. It was purple.
Having tried to make ChatGPT produce coherent discourse on some real-world questions, I've observed it has a strong tendency to qualify any assertions with a vague "on the other hand" concluding paragraph with a laundry list of other possible considerations. For now at least, that's something readers can look out for.
I'm ghostwriting briefs for a guy who is litigating pro se. He's in charge of strategy, and comes up with the arguments. I'm just formatting, and writing in lawyer English. So I guess I am ChatGPT now.
There are already commercial tools that purport to identify ChatGPT, but they're not performing so well yet.
Double-posting gives you a 2x wpm advantage right out of the box for free
Double-posting gives you a 2x wpm advantage right out of the box for free
Our kid got in early to a competitive school that requires an introductory video.
That seems like a good source for discriminatory admissions decisions.
I heard a doctor say that ChatGPT was able to organize a lot of medical history, but it did not identify the right antibiotic for otitis media. It was, however, able to write a persuasive essay on how NHS Trusts should be organized.
I think it would be great if it were able to pull information out of unstructured fields and organize it. Right now, there's so much stuff in medicine that's about entering text into structured fields so that the data can be captured. It would be great if ChatGPT could do the data entry crap and allow medical providers to have conversations with their patients.
I mean, a lot of people's jobs ARE just a specialized form of typing. With occasional talking (or should I say, "typing with my mouth").
With occasional talking (or should I say, "typing with my mouth").
In the Vtuber subsphere of the internet, there was some annoyance at a Twitch year-end retrospective implying they were all AIs. Now there is an actual AI-controlled one (LLM to TTS) that alternates between mostly-comprehensible banter and weird drama-stirring. (It has a person monitoring it to censor offensive stuff before it comes out.)
I am sure it could happen but...I have spent a lot of time chit chatting with the chatbot and (1) it's a terrible writer (2) it is a massive fabulist.
I think this is the kind of thing it could manage and it could possibly do a so-so college essay. It's not going to be reliable to knock anyone's socks off.
The amount of time you spend double-checking the nonsense it tells you is the amount of time you could have spent googling or doing it yourself.
It's a really good liar though. It sounds very sincere when it lies. When you say "that doesn't sound right, how could it be true?" It says "I apologize..." It gives you a very sincere profuse apology. Then it usually launches into a whole new bunch of fake facts. It probably depends on the questions you ask, the corrective prompts, and luck but...it definitely can't replace us as-is.
[I paid for gpt4 simply to see if it was better, and it is not.]
Cala: "if this bot is legit disruptive in coding and kind of bad at writing"
Oh, worry not, for *actual* coding, it won't be disruptive: it probably won't even have any effect. 90+% of all programmer time is spent on maintenance. Show me a bot that can understand enough of a large, legacy, complicated piece of code that accreted over a decade to find and fix bugs in it, or to add function, and maybe you're onto something. But this idea that somehow ChatGPT will be able to put programmers out of business b/c it can write small bodies of *new code*... that's bunk.
41 will not age well. Not that it will put programmers out of business (if anything it will increase programmer productivity and lead to more programmers), but the part about how it can't read a large codebase, find and fix bugs, and add function.
3: admissions videos sound like an incredibly stupid idea. We are very easily swayed by small details of video editing and we don't know it's happening to us unless we are professional video users or critics. Academics are professional text readers and writers, not video users. It seems extremely open for bias and abuse. Like the interviewer asking you if your father is an alumnus.
"Also had a legacy advantage so maybe the video didn't matter"
Oh, right.
re: 7, 9, 10 etc
I don't think the coding side of it is really that good yet, at least using tools aimed at coders like Github Copilot. I've already started using it, off and on, for quick scutwork jobs: "go through this data, look for the duplicates on field X, and throw away all but the most recent" or whatever. But almost immediately once you get beyond that, it starts to make assumptions or go wrong in ways that take time to fix.
I have zero worries about my own role, since on paper at least, I'm all about the conceptual problem solving and the working out what clients want, which is not going to go away any time soon.
But I have one junior-ish coder who works for me who should be legit worried, because he basically just gets used for scutwork of that type and he doesn't seem motivated to put in the work to learn how to do the non-scutwork stuff. If I have to micromanage him, and I do, then I might as well micromanage some ML-based code.
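The kind of scutwork job described above ("look for the duplicates on field X, throw away all but the most recent") is a few lines in plain Python; the field names here are made up for illustration:

```python
# Keep only the most recent record per duplicate key. "id" and "updated"
# are hypothetical field names standing in for "field X" and a timestamp.

rows = [
    {"id": "a", "updated": "2023-01-05", "value": 1},
    {"id": "b", "updated": "2023-02-01", "value": 2},
    {"id": "a", "updated": "2023-03-10", "value": 3},  # newer duplicate of "a"
]

latest = {}
for row in rows:
    key = row["id"]
    # ISO-format date strings sort correctly as plain strings.
    if key not in latest or row["updated"] > latest[key]["updated"]:
        latest[key] = row

deduped = list(latest.values())
print(len(deduped))  # 2 -- the newer "a" row plus the "b" row
```

Which is exactly the point: the task is trivially specifiable, so whether a junior coder or a bot fills it in matters less than who can state the spec.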
44.1 is another way of saying that these tools can be a productivity enhancement and force multiplier for good programmers, but aren't (now) any kind of threat to them. But they are definitely a threat to certain kinds of programmers.
admissions videos sound like an incredibly stupid idea. We are very easily swayed by small details of video editing and we don't know its happening to us unless we are professional video users or critics. Academics are professional text readers and writers, not video users. It seems extremely open for bias and abuse.
Definitely a terrible idea, but I doubt that any academics ever watch these.
There's probably a blooper reel circulated by the admission people.
The thing about having the bot learn from Stack Overflow is that a ton of the information on Stack Overflow is obsolete, and I wouldn't trust ChatGPT to know that.
Stack isn't bad, but it's not great for SAS.
I've figured it out and I don't have to learn R even if I have to work for 15 more years.
Also, I can keep being unable to format shit right in Word.
||
At first they were for demolishing the king's house as a symbol of disgrace and fining him 100,000 drachmai, but, in response to his pleas and promises to atone, they settled for passing an unprecedented law to attach ten advisers to him.|>