Yes. Microsoft and Adobe are the worst.
It's not just unnecessary. If I open a graph I made, I am actively impeded by an attempt to edit it. It's supposed to be unfucked with.
These 1.5 million servers, running at full capacity, would consume at least 85.4 terawatt-hours of electricity annually--more than what many small countries use in a year, according to the new assessment.
I mean, yes, no doubt this is technically true, but many small countries are REALLY small. A major global industry probably should use more electricity than the Turks and Caicos Islands.
(Data centres already use 250 TWh per year.)
And you have to read quite far down to learn what Alex de Vries thinks is actually likely to happen. The worst case, he says, is a tenfold increase in energy use by data centres. And since data centres represent 1% of total global electricity use, that means his worst case is "global electricity demand goes up by 9%, over some unspecified period of years". And he is emphatic that he thinks this is very unlikely to happen - growth is actually going to be a lot slower.
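Back-of-envelope, just to make the arithmetic explicit (the 1% share and the tenfold multiplier are his figures; the sketch below is mine):

    # rough worst-case calculation using the figures quoted above
    data_centre_share = 0.01       # data centres are roughly 1% of global electricity use
    worst_case_multiplier = 10     # his worst case: a tenfold increase in data centre demand
    extra_demand = data_centre_share * (worst_case_multiplier - 1)
    print(f"extra global demand: {extra_demand:.0%}")   # -> 9%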
Global electricity demand is rising at 2-3% per year anyway, and that'll speed up as the world moves away from fossil fuels - IEA reckons 3.4% a year over the next three years.
Adobe is AWFUL. I am frequently opening PDFs and keyword searching for specific words/phrases.
There is no universe in which AI would be helpful to me in this context (legal questions that absolutely require me to read the original text in situ) and yet that damn AI "helper" constantly interrupts my screen and tries to insert itself. It's as bad as Clippy.
And yeah, my understanding is that an AI query uses about 15x as much energy as a standard Google search, but training AI models is where the massive energy demands come in.
I have started using Edge to read codebooks that are stored as PDFs. It's not as bad.
We're probably at a sour spot right now for the amount of usefulness produced per kWh consumed. I would expect usefulness to increase over time, and much faster than power usage, but, yeah, this is a new energy-intensive industry where the energy costs are pretty opaque. Frankly this has always been true of data center computing and we're lucky the big companies involved give any shits at all about sourcing green energy. We're also probably lucky the industry trend has been towards a handful of big companies running most of the data center capacity in the world relatively efficiently, instead of untold thousands of smaller, less efficient mom and pop operations. (E.g., my grad school advisor just had ten PCs humming in a closet instead of renting CPUs from Amazon.)
Data center operators are huge buyers of renewable power (there's a reason they love the Bonneville Power Administration) and investors in things like wind farms.
4: a training run is a big lift but it's a one-off project (ish), while inference operations, although lighter, are much more common. A further twist is that the real power hog is I/O rather than compute, and the inference side is I/O bound. This is why the new GPUs contain great slabs of high-bandwidth memory stacked up right next to the compute die.
I hate that Teams has a horrible chat bot offering to help me with things all the time so that I won't contact a live person in tech support. (I sound like my grandmother complaining about not being able to get service in a department store in the '80s.) I once asked it how I could make it go away. One day, I was so annoyed that I filed a ticket for human review asking how I could disable the chat bot or hide it. There is no way.
I'm actually worried about how people are thinking about using it in healthcare.
People who make me use Teams are the worst. Zoom is just better.
Data center operators are huge buyers of renewable power (there's a reason they love the Bonneville Power Administration) and investors in things like wind farms.
Quite. By far the most important point is "where are all these new data centres going to be?" France or Canada, it isn't really a problem. Japan or China, it's a big issue.
I'm not enough of an expert in computers to know if this is true, but I think AI involves taking my processor power and using some of it to run statistical models in the background. I want all my power for the statistical models I'm intentionally running.
We're probably at a sour spot right now for the amount of usefulness produced per kWh consumed. I would expect usefulness to increase over time, and much faster than power usage
I think that's happening now (assuming that price corresponds to power consumption, which I think is a fair assumption).
In short, in recent months all three companies have delivered aggressive price cuts but only modest performance gains.
Interesting sub-question is how much extra power the PRC will need to burn to run competitive models without the latest silicon.
It's an explicitly hostile act to put in a new "feature" that is still in development and make it impossible to turn off.
In short, I wish I had pirated more Microsoft software back in the 90s.
I wonder if Japan will start making eyes at Sakhalin or even Kamchatka within 50 years. Lots of cold empty space to put those servers in.
They're going in the Aleutians. Teo is on it.
So I agree that LLMs take way too much power for their usefulness, but I have a hard time seeing that it's a lot of power relative to our grids.
Probably the whole LLM thing comes crashing down in a few years just like blockchain and VR and NFTs did before it - it remains a niche enterprise but not The Future of Everything - and then we've got a lot of cheap data centers and a buyer's market.
I wonder how long companies will be willing to spend so much on energy if they aren't getting a return on their "AI" investments. I haven't looked closely, I'll admit, but it seems like there's a lot of money in AI that's still looking for a profit model.
I assume as long as they can piss me off, they'll continue.
19: You've seen Tweety's post on the mentality, right? There has to be a Next Big Thing because the universe owes it to investors. But yes, it will still have to shift at some point.
Sorry but AI will never be able to improve upon Edward Yang's masterful depiction of three generations of a Taiwanese family.
So I agree that LLMs take way too much power for their usefulness, but I have a hard time seeing that it's a lot of power relative to our grids.
Yeah, the energy thing specifically seems annoyingly oversold to me. AI does use way more energy than regular computing but that's starting from a low baseline relative to most other things. I saw something recently about how each AI query uses three watt-hours of electricity. That's a lot more than a standard Google search, and it's basically all waste because it doesn't add any value (indeed it often subtracts it), but that's also not a lot of electricity! The average US retail price of electricity in 2022 was 12 cents per kilowatt-hour, so an AI query would cost 0.036 cents (actually less because the companies doing this are probably paying industrial rates).
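Showing my work, in case anyone wants to check the multiplication (the 3 Wh and 12 cents/kWh figures are the ones quoted above):

    # cost of a single AI query at 2022 US average retail electricity prices
    query_wh = 3.0                  # watt-hours per AI query (figure quoted above)
    price_per_kwh = 0.12            # dollars per kilowatt-hour
    cost_dollars = (query_wh / 1000) * price_per_kwh
    print(f"${cost_dollars:.5f} per query")   # -> $0.00036, i.e. 0.036 cents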
But 19 is really the main point. I still haven't seen any real evidence that any of this is profitable for anyone. It seems like a bubble that will burst soon.
Don't bubbles bursting mess with lots of people who had nothing to do with the bubble?
I'm torn on this, because there's so much AI bullshit and useless features, but I still think the underlying tech is amazing and important. Someone will be left standing.
Also, the ethos in tech is very much "let's try it and see what happens" which means that consumers get exposed to a lot of stuff that no one is sure is a good idea; we're basically test subjects all the time.
Exactly. And I'm not happy about that and actively looking for ways to get back at them.
I don't think it's a bubble in the sense that its popping will crash the economy. Much like with NFTs, the big firms will shutter their big ambitious projects, a smattering of startups will fail, and everyone will move on to other pursuits. The uses of LLMs will shrink down to the 1% of use cases that are plausibly beneficial already, plus non-consumer-facing R&D.
Going back to electricity, the combination of renewable energy boom and battery production boom means a lot of excess capacity may be on the horizon.
I've been using AI to develop ornate SQL queries that would take me hours and hours to put together on my own. I describe the output I want, the schema I'm using, and some basic hints about how the data is connected, and it generates code for me to test, and then I keep refining my request until I get something that works. On its own, it still gets a lot of shit wrong (like I have to consistently remind it not to compare a VARCHAR with an INT), but as a tool in my toolbox it provides a massive improvement to my productivity at that particular task.
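To give a flavor of the correction I keep having to ask for (table and column names here are invented, this is just the shape of it), the fix is usually an explicit cast instead of comparing a VARCHAR straight to an INT:

    # hypothetical illustration: cast the text column before comparing it to an integer key
    query = """
        SELECT o.order_id, c.customer_name
        FROM orders AS o
        JOIN customers AS c
          ON CAST(o.customer_ref AS INTEGER) = c.customer_id   -- explicit cast, not a bare VARCHAR = INT comparison
        WHERE o.status = 'shipped'
    """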
27: Right, sometimes I see people saying "We didn't even ask for this!" and, well, they try stuff out all the time to see how people like it. If it had worked as advertised people would have liked it as a result of being exposed to it and seeing the use. But it's still a shanda that (1) it seems to have been implemented in a rush of enthusiasm / FOMO at the expense of any real validation and (2) there's no overhaul in response to the backlash and many, many huge problems identified (glue pizza).
I think it's mostly rushed because they are trying to normalize copyright infringement before anyone can stop it.
it still gets a lot of shit wrong ... but as a tool in my toolbox it provides a massive improvement to my productivity at that particular task
Exactly my experience. It's become an essential part of my workflow and giving it up would feel like going back to the dark ages.
shrink down to the 1% of use cases that are plausibly beneficial
I do think this undersells the potential. Just think about the ability to understand natural language. It's like we leapt 10 years into the future overnight (I know they've been working on these models for a long time). The ability to give it vague natural language ("what's that Orson Welles European noir?" "what was I looking at a few weeks ago that mentioned a heart disease study?") and get useful information back is incredible (I don't think these applications exist yet*, but that's partly my point--there are a lot more places I want "AI" capabilities).
* Or, in Microsoft's case, they're probably a privacy nightmare.
The thing that bugs me about AI (and this is the way of the world) is that the benefits seem to be about to accrue to the rich.
I don't want AI-generated articles. I don't want AI therapists or questions about end of life goals handled by AI, because I think it's dehumanizing and these are basically human social issues.
Someone basically said, "why can't we work on getting AI to do our chores so that we have time to create art?" Like if a robot could organize my closet and put away my clothes for me, that would be great.
And if it eliminates a bunch of tedious jobs, maybe we should confiscate some of the money of the top 1% to create a universal basic income.
What if all the language ability is just copying some Word file that someone else wrote?
"what's that Orson Welles European noir?"
Which one? At least four would fit this description and google would serve just as well.
It's like we leapt 10 years into the future overnight... The ability to give it vague natural language.... and get useful information back is incredible
I lose you in the last clause there. The errors in practice seem so common and so hard to reliably notice on skim - because we associate well-formatted text with correct text - that the link to usefulness is attenuated to the point of uselessness. If I were coding with this tool I would see it as a mild time-saver but feel obliged to check over all its output like a hawk, like a college intern had just done it and could have messed up any number of things in ways I might have trouble predicting because of my inadequate mental model. (Not a coder by training, but I do some depending on your definition.)
39: This kind of reminds me of when everyone was psyched about Theranos, and it turned out they were just sending the lab work out to somebody else.
I think natural language processing could be great. Right now, for example, a lot of health care quality metrics depend on doctors putting something into a structured field (a depression screening or a range of home blood pressure readings). Getting computers to support people doing their jobs instead of training people to fit their personal way of working into standard workflows would be fantastic. Because fighting against people's natural way of working is a constant struggle.
Oh yeah, assembling dictation into medical notes (some formatted as legible text, some structured into fields) is something I think my org is pursuing and could be relatively reliable because there's so much raw material to train on. Although the Epic/Cerner duopoly means we may not have a lot of recourse if it's done badly.
I want to have AI built into a cardboard box so that I can shit in the box and it will tell me if I have colon cancer. Right now, I have to shit in the box and FedEx it to a lab. In addition to the delay, the usual FedEx pickup place at the convenience store near me won't take the box because they know it's got shit inside.
At least four would fit this description and google would serve just as well.
Would it, though? It feels so cumbersome to google and sift through the results. You can just ask the chatbot "what's the Orson Welles European noir?" and it says "you're probably thinking of The Third Man." Yup, I was. I can follow up asking if there are others, and I get a nice list. This is just better than googling. Unless I'm asking about relatively obscure or difficult topics, the hallucinations just haven't been a problem.
Anyway, I'm pro, lots of people are con, and we'll see how it all shakes out. I could be wrong!
Hard for me not to see 'AI' in similar terms to cars: i.e. a case of demand being manufactured along with the product, in case people decide not to bother. My last MacBook lasted nearly ten years, maybe? The current one looks set to go even longer. I love it plenty; quite happy to keep it.
It's sad also that 'AI' is coming along at just the same time that people go really sour on software in general; the endless scrolling, potential psychological or cognitive harms, horrible effects on social cohesion, etc. You'd hope for a certain reading of the room, but no. Possibly compounding the suspicion that the computer industry as a whole now harbours some fairly dark impulses.
And then the energy. Renewables promise abundance; in the longer run it's good news, but there's a transition to manage, and we have to get there.
So it's a thumbs down for 'AI' from me!
46: This blood test sounds better.
https://www.webmd.com/colorectal-cancer/news/20240314/new-blood-test-colon-cancer-highly-accurate-trial
Try that link instead.
LLMs are very good at quickly generating large amounts of highly formulaic writing, including types of writing that a lot of people apparently didn't realize were highly formulaic before. That's not nothing! There's a use for that, and some applications that may even be enough of an advance to be profitable for someone (as in the coding examples ogged and Spike mentioned). So I don't think they'll just disappear entirely, but I do think the hype will fade and the current trend of sticking them in everywhere is a bubble that will burst at some point relatively soon.
("AI" is a broad category that includes lots of things besides LLMs, so I think it's worth being specific here.)
LLMs are very good at quickly generating large amounts of highly formulaic writing
Yes, but I think that's the wrong way to think about them. They're really great if you think of them as "intelligent search." Take my vague natural language query, apply it over a vast corpus of natural language input, and give me a natural language summary.
I totally get that people are using them to write papers or whatever, and I also get why that use is salient for humanities types, and I also get that that's the party trick that gets people's attention, but it's really taking a vague input and giving back a reasonable response that's amazing.
53: Sure, but don't you think the use case that will make it harder for artists and creative types to earn a living is the one that will win out?
They're really great if you think of them as "intelligent search." Take my vague natural language query, apply it over a vast corpus of natural language input, and give me a natural language summary.
It's not a reliable summary, though. It's a document in the form of a summary of that information but it isn't necessarily accurate. Maybe it's usually accurate enough but that's not something you can rely on unless you already know the information, in which case you don't really need to search for it.
49, 50: I'm mostly in it for the pooping in boxes.
Like it's not actually searching for bits of information the way a Google query does, then synthesizing them into a readable summary. (That would indeed be impressive!) It's simulating what a summary of that information might look like but not creating one.
They're really great if you think of them as "intelligent search."
I think you are misled as to the efficacy of this because you have a good idea what correct answers will look like, and more people are already being badly misled by the assumption that it is as reliable as searching websites. Transcribing something that went viral on Bluesky recently:
weird interaction with a student this week. they kept coming up with weird "facts" ("greek is actually a combination of four other languages") that left me baffled. i said let's look this stuff up together, and they said ok, i'll open a search bar, and they opened... ch*tgpt
and i was like "this is not a search bar" and they were like "yes it is, you can search for anything in here"
this kid was extremely combative with me, and i understood why. i was sitting in front of him and telling him that the internet, a computer, technology, all these supposedly authoritative things... were wrong, and that i, one person, was right. he basically *couldn't* believe me.
he decided that i was simply a teacher who'd made a mistake, he could check it, after all! he could look it up! he could find the REAL facts. i obviously hadn't done that, i was just an adult who'd decided i was smarter than him, hence the defensiveness. like i said: i understood
it was so fucking rough. i did my best, but i am one person trying to work against a campaign of misinformation so vast that it fucking terrifies me. this kid is being set up for a life lived entirely inside the hall of mirrors
58: Yeah, I think that's basically correct. The use cases I mentioned upthread don't really suffer from this problem ("what's the movie I'm thinking of?" "what was I looking at earlier?"), but if you go in cold, so to speak, searching for authoritative knowledge, you could have a bad time.
And this mirrors a concern in the coding community about these things being useful for relatively experienced developers but potentially disastrous for more junior ones. I think that's a real problem, too.
Out of an abundance of caution, we should start the Butlerian Jihad.
If I were coding with this tool I would see it as a mild time-saver but feel obliged to check over all its output like a hawk, like a college intern had just done it and could have messed up any number of things in ways I might have trouble predicting because of my inadequate mental model.
At Heebie U, we had a presentation by a guy on how to adjust your classroom to the reality of AI. It was mostly aimed at the humanities. A huge chunk of it was "Create assignments where they go off and use AI, and then in the classroom become critical readers of what they showed up with." It's a whole different skillset from conceiving and writing an essay, and it's fucking tedious drudgery, and it has pedagogical value probably, but good lord, I would stab my eyeballs out if I were a student.
A huge chunk of it was "Create assignments where they go off and use AI, and then in the classroom become critical readers of what they showed up with."
So not just "assume they'll use AI" but actively assign them to and incorporate criticism of it into the pedagogy? That's fascinating but seems like it wouldn't really build all the skills they're supposed to be learning.
Between you and me, I think students are trying to use AI outside of formal instruction. On college tours, the admissions people hinted that they had to disqualify many students on the grounds that their various essays were AI'ed.
"Create assignments where they go off and use AI, and then in the classroom become critical readers of what they showed up with."
I think Ethan Mollick has made that argument and there's some resources here: https://interactive.wharton.upenn.edu/teaching-with-ai/
You can just ask that chatbot, what's the Orson Welles European noir
I would go to IMDB or Wikipedia and look at the Welles filmography, like some sort of caveman. It's not a long list of films. For a character actor with 200+ roles, maybe I wouldn't do that.
I've ended up at a couple of customer service "chatbots" that wouldn't let me enter free text. Instead, I put in canned "questions" and got canned "responses". It was kind of ridiculous. In one case, an ISP, I think they previously did allow free text chat, almost certainly with a person. I assume they wanted to lay off staff and claim to be using sophisticated AI but reliably working AI wasn't available, so it's a fake chat FAQ for everyone instead.
69: A reason it may not be "real" AI.
I'm actively working on projects to use LLMs and NLP to do various kinds of information extraction in the cultural heritage space. Of the "here's a pile of massively unstructured, completely uncatalogued documents, give me a way to find things in it" variety.
Some of what current multi-modal LLMs can do is amazing. They are really unbelievably good at extracting text/transcripts from images, even when those images are incredibly bad microfilm copies of handwritten documents from 200 years ago. I've tried the state of the art non-LLM stuff for that for years, and some of it is OK, but it's just nowhere near what GPT-4o (and presumably a ton of other models) can do.
I don't think people who haven't tried them for this kind of task have a sense of how much better they are than the state of the art a year or two back.
On the other hand, with extracting structured data--which is quite a common LLM type of task--I slide back and forth between being quite impressed at the overall quality of the data output, and massively frustrated at how poor the current state of the tooling is, and how hard it is to do things that earlier/simpler tooling made easy.
However, I would say that there's no way, if I were starting some kind of project of this type, that I wouldn't be using LLMs. Even if I have to feed the output into some human review pipeline at the end, it's still way quicker than getting people to do it - even if we (the institutions I am working for) could afford that, which they can't. If you are some relatively small archive with millions of pages of stuff in grey archival boxes, you just aren't going to manually transcribe and catalog that.
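For anyone curious how little plumbing the transcription step takes, here's a rough sketch (using the OpenAI Python client; the file name, prompt, and model choice are placeholders, not our actual pipeline):

    # minimal sketch: ask a multimodal model to transcribe one scanned page
    # assumes the openai package is installed and an API key is set in the environment
    import base64
    from openai import OpenAI

    client = OpenAI()
    with open("scan_page_017.jpg", "rb") as f:        # placeholder file name
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe the handwritten text on this page as plain text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

In practice most of the effort is in batching, retries, and the human review pass at the end, not in the call itself.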
I've also been part of* some work with OCR, speech recognition, etc. for library and archives materials and the "getting text" side of AI/ML is really impressive. But I've also seen a fair number of "turn that text into structured catalog data" where the person promoting it has obvious and hugely consequential errors in their slides and demos. Usually, they've acknowledged the issues and handwaved them away as something that will be cleared up. To be clear, the distinction I'm making is between getting the full text of a book and cataloging the book. Arguably, you might not need to catalog the book if you've gotten all the text, but it depends on the larger context of what you're doing.
*Albeit not directly working with the tools/models myself for the most part.
re: 72
I am directly working on cataloging things (producing properly formatted structured data) and I don't disagree with you.
Currently, on one project I am working on, I'd say we are getting about 80% accuracy on the catalog data (for a particular type of material), but when it's wrong, it's sometimes wildly wrong or wrong in ways that are deeply important: when it hallucinates an identifier, or gets the identifier slightly wrong, so it matches with entirely the wrong thing, for example.
Ed Zitron's latest newsletter on the latest Goldman Sachs report on AI is very good
https://www.wheresyoured.at/pop-culture/
62: It's also counterproductive. The analogy I use: it turns out that my teachers were wrong and I always have a calculator with me! But we still don't teach kids to add by saying "here is the plus button! Good luck!" For someone with good writing and critical thinking skills, AI is a productivity boon. If you don't have that, you're in rough shape, and if you spend your college education proving that I can replace you with free software, what are you doing? The easiest way for me to deal with it as an instructor: a ChatGPT essay is the new C-. I would rather stab out my eyes than make assignments asking them to evaluate whether ChatGPT can summarize the Apology. The point of assigning summaries is not that the world needs more bad summaries.
Also, what's AI? Grammarly? ChatGPT?
I've had some success asking the class to design the AI policy for the course. Last semester my class landed on: free Grammarly is okay, ChatGPT is not, Dr. Cala will not chase anything down that doesn't flag in the course software, Dr. Cala of course does not think that the course software is the last word, and Dr. Cala will just give a zero for work not done with no makeups.
I figure in about five years I'll know how to integrate AI. But right now we're at the "don't trust that Wikipedia" stage and fuck if I'm playing gotcha instead of teaching philosophy.
60: one of shiv's junior guys recently told him: I think I need not to use ChatGPT because I don't understand yet what it outputs. That man can be taught!
https://x.com/jzellis/status/1810806144492789825?s=46&t=nbIfRG4OrIZbaPkDOwkgxQ
I've heard of computer science classes assigning some handwritten work in a bizarre belief that it would cut down on chatGPT use, rather than shift that use to copying chatGPT by hand onto paper.
Oh! My pet peeve. Intro compsci teaching is far more threatened than intro humanities by ChatGPT, because it does that job perfectly, and it's undetectable. But the lousy journalists like to wax on about how they're being replaced, so it's a Problem for the Humanities. Not for junior coders. Tech never has a bust.
really unbelievably good at extracting text/transcripts from images
Transcribing documents into and out of airgapped systems?
80: you may think Ttam is an Indiana Jones-cum-007 figure posing as a scholarly librarian to access the world's secret archives but I couldn't possibly comment.
Great now we have two Call of Cthulhu PCs on the blog.
Paraphrasing a Bluesky thread launching from the one I previously transcribed, someone whose job involves fact-checking animal facts says it used to be if you googled something random like what does the average elephant weigh, you would get some bad information, including some that looked authoritative like zoo websites, but most of it would be okay. Now, the top 10 results are all garbage, and so is the Google AI response.
"It thinks possums eat 70 bazillion ticks a year, it thinks mantis shrimp can see hundreds of colours, it thinks sharks don't get cancer etc etc."
83: but this person is not telling the truth!
I googled "what does the average elephant weigh" and the AI response was "between 5000 and 14000 lbs" and the top ten responses were all accurate, and were from, in order:
Elephants for Africa
IFAW
Seaworld (which is I admit slightly surprising)
Tsavo Trust
WWF
Global Elephants
Denver Zoo
Quora (an answer quoting WWF)
London Zoo
San Diego Zoo
As for the other examples, if you google "how many ticks does a possum eat per year", eight of the top ten results are reporting on a recent study saying that in fact possums in the wild don't eat ticks, contrary to previous belief. The same is true of the mantis shrimp one (they can't actually see hundreds of colours) and the shark one (sharks do get cancer).
I think the lesson here is "you can still trust Google, but you can't trust randoms on Bluesky chasing engagement".
Perhaps I should set myself up on Bluesky as "someone whose job involves fact-checking people whose job involves fact-checking animal facts".
Doesn't google give different results depending on your past search history? I would assume the people least likely to know whether a google result is reasonable are also the ones most likely to get a really stupid result from a google search, because they don't have a history of looking at sites with real information.
The two fun things about chasing weird AI/search results are:
- companies quietly "fixing" specific weird results and then saying "we have no idea what you mean"
- people making stuff up
Doesn't google give different results depending on your past search history?
Yes, good point - but I've just repeated the searches using an incognito browser, to check. The answers were slightly different - different order - but not materially. Certainly no obvious rubbish.
I would assume the people least likely to know whether a google result is reasonable are also the ones most likely to get a really stupid result from a google search, because they don't have a history of looking at sites with real information.
But the comparison here is between me, someone who basically almost never googles stuff about wildlife, and this fact checker person, who does it for a living. If anything, you're implying I should see more stupid results than she does - but she's claiming it's the other way round.
companies quietly "fixing" specific weird results and then saying "we have no idea what you mean"
Good point. To check that, I pulled my Collins Guide to the Insects of Britain and Western Europe off the shelf, opened it at random, and tested Google's knowledge of the insects I found. No obviously weird or erroneous replies about the Egyptian Grasshopper, the Poplar Hawkmoth or the Cuckoo Bee.
people making stuff up
Yep.
I bet fact checkers Google stupid stuff all the time. The news is stupid too; to check facts on reporting about Republicans, you'll be searching scams and medical misinformation.
If someone here tells me they got a nonsensical AI response and ten nonsensical front page results from a Google search, I'll believe them, even if I can't reproduce the result myself with an incognito search.
I would want to know what the question was, though.
"Was Gollum's penis animated behind his flap?"
"Is anything worn under the kilt?"
"No, ma'am, it's all in perfect working order."
Great now we have two Call of Cthulhu PCs on the blog.
That is very good.