I don't remember how long, but I can reassure you that it does pick up again.
My sense was that the book was fundamentally about the cybernetics. "Unaccountability" was the hook for "why you should care."
(And I should repeat that I thought it was fascinating and useful, which might not have been clear from my post, which was limited to making fun of Beer's career in the British Army.)
I guess I'm not yet far enough along to believe that this topic can be modeled usefully or successfully understood. It seems so easily mired in over-complexity.
The rest of the book after that chapter is more a post-mortem on why economics doesn't get it. That is, why does economics ignore the most important economic institution of our time, the for-profit business organization? YMMV, but I preferred the chapter on cybernetics, as weird as it was, because I've heard the "econ doesn't get it" story over and over. He has his own finance/macro take on this story, which is clever as always, but...
I guess I'm not yet far enough along to believe that this topic can be modeled usefully or successfully understood. It seems so easily mired in over-complexity.
The engineering use of the model tends toward overcomplexity. But for understanding, complex organizations ought to be modeled, and Davies makes his case for cybernetics here. There are other approaches that have tried but failed to catch on, focusing either on information or incentives. But good modeling simplifies the problem to try to illustrate something at the heart of it.
It is interesting to compare to Holacracy, the so-called "self-management" movement, which also became mired in overcomplexity. (It's been called management for people who loved D&D in middle school.) It does seem like a tendency of this type of management system to move toward overcomplexity - trying to anticipate too many contingencies.
But this is something that a good model could explore: why? And what are information-efficient shortcuts that don't degrade performance too much? How much do they relate to the typical business forms we see? How would they interact with digitalization or AI in the system?
Speaking of unaccountability sinks, I've been on hold with the Walgreens pharmacy for AN HOUR and I'm so frustrated, and there's a real possibility that it will hang up on me (again), or I'll have to hang up to go teach my class, or my bladder may burst.
Does he talk about how to assess if a model is actually a good model or not? Or principles of evaluating the quality of the model? Or is that just a matter of "have good judgement and a solid understanding before you try to make your decisions on what's in the black box and what isn't, and what the regulator is going to be, etc"?
It is my strong personal belief that you can pee while on a call with Walgreens, unless you are on a phone with a cord or there's no cell service in the bathroom.
Even if I'm at work? It seems weirder at work, I suppose. Anyway, I'm now also on a chat with them.
What is the point of tenure if you can't take your phone into a bathroom stall while on hold for an hour? (To be clear, I'm mostly kidding, but don't give yourself a UTI out of decorum.)
Actually, the chat guy gave me good advice after all - he said to hang up and call the store manager. The store manager is now talking to the pharmacy.
Or is that just a matter of "have good judgement and a solid understanding before you try to make your decisions on what's in the black box and what isn't, and what the regulator is going to be, etc"?
It's been a few months since I read it, but my takeaway was pretty much this. Or rather: only look in the black box when it's not working the way you want or expect. But that's a retroactive rule, not a prospective one.
His substack keeps coming back to specific examples in an illuminating but somewhat rambling fashion. I like it.
I'm wondering if I could get him to come for an academic panel I'm organizing. Not sure what he'd be like in person or as a speaker. Awkward is my main prior.
My professional life is all about the unaccountability of people who should be accountable to me.
Just today I had someone schedule a meeting with a big chunk of my staff at a time that doesn't work for them -- but they must abandon their jobs to attend anyway. I wasn't even directly informed of this meeting. There are corporate imperatives that are entirely separate from the work of my group.
In the customer service realm, I spent, total, probably 10 hours working out the setup of my new Verizon equipment. The human beings were all genuinely terrific. The system they were operating in was a nightmare.
One of the really interesting things about the unaccountability of organizations is that they are all about making a show of accountability. I get surveys all the time -- at work and elsewhere -- that I am convinced are designed to not identify problems. So with my disastrous customer service experience at Verizon, I was asked how the employees did.
Assuming Simulated's memory serves in 13, is cybernetics really just another domain for inapplicable math? Like, it almost feels like math - black boxes are functions, variety is the range, the regulator is the domain, there are parallels to being one-to-one and onto, etc. I'm flailing trying to figure out if this is really just dressed up math, or if it ever solves applied problems connected to management and unaccountability.
To be fair, this is a philosophical vortex I get trapped in a lot, when things feel mathy, but claim to be applied, but don't seem capable of solving the actual problem at hand. Like I get really unclear around the goal of applied math sometimes. Are we just doing math that we find interesting and inspired by real life, or do we think we're making progress on a problem, or is there a third option?
if this is really just dressed up math
Isn't everything?
18.2: Without actually understanding this at all: when math enters the social sciences, I generally assume the third option -- designed to make something that is actually a preference into the one true answer, supported by science.
is cybernetics really just another domain for inapplicable math?
Essentially, yes. My recollection of Davies' description is that the people working on it believed it could be systematized, but, practically speaking, it's more valuable as a way of looking at the world and a set of questions to ask than as a specific toolkit.
As he says: https://backofmind.substack.com/p/maths-becomes-metaphor
Stafford Beer identified a similar issue in 1959 ("Cybernetics and Management", one of his first books and before the development of his main model). There's a transition which happens as the system gets very complex; it becomes absolutely impossible to write down a system of equations to describe it, let alone solve the equations. So although you know (or at least, strongly believe) that the maths still works, you can't actually do the maths any more. You're left using the same concepts, but metaphorically; based on your knowledge and understanding of how the model works in tractable applications, you can think about its likely behaviour and rule things in or out.
...
At its root, "The Unaccountability Machine" is me trying to say that we need a different set of metaphors to talk about our problems. I don't think I necessarily realised that when I started writing it.
this is really just dressed up math
For sure it is. I can see why, as a mathematician, you would hate a lot of the faux math that gets invoked in the social sciences. But wielded well, a simple model can offer some insight into a problem that brute-force thinking alone might take longer to get to. I say that as someone who is definitely not a modeler.
I like this story about good modeling that Hal Varian tells in his essay "How to Build an Economic Model in Your Spare Time":
Several years ago I gave a seminar about some of my research. I started out with a very simple example. One of the faculty in the audience interrupted me to say that he had worked on something like this several years ago, but his model was "much more complex". I replied "My model was complex when I started, too, but I just kept working on it till it got simple!" And that's what you should do: keep at it till it gets simple. The whole point of a model is to give a simplified representation of reality.
IMO a fair number of fields use mediocre modeling (i.e., too many free parameters, model structure given by possibly-relevant first-principles reasoning) that can then be improved when better or higher-volume data appears. I see progress as usually being driven by better sensors and new kinds of data. I don't know whether this applies to the kind of modeling Davies discusses - whether more and better data makes a difference.
Remember that companies are currently run through a bunch of people with Excel spreadsheets that are conceptually very simple but hugely complex in terms of the data they are trying to track and integrate to keep on top of revenues, costs, margins, etc.
That we might want some people with spreadsheets or software to keep track of, and act on, other types of information as well is a reasonable action item: hard, but doable.
Ok, I feel like I have more clarity now. This conversation is helpful.
25: Please fill out this survey to rate your commenter experience and get 5% off your next front-page post.
Is there a comment box where I can share my thoughts on the quality of the survey? That's one of my favorite pastimes.
Is it ALL COMMENT BOXES ALL THE TIME? Again, one of my favorite pastimes.
On a scale of 1 to 10 with 1 representing magnificent, and 10 representing spectacular and 2 representing wonderful and 3 representing perfect and 4 representing great.....
You rate a 1 as a commenter. Well done!
But wielded well, a simple model can offer some insight into a problem that brute-force thinking alone might take longer to get to. I say that as someone who is definitely not a modeler.
My hobbyhorse on the subject is grabbing the bit, sorry: insight and brute force thinking are also models. What's in our heads is models.
Mostly we know that "writing it down" and "talking about it with other people" are useful for getting our heads more like the world. Equations and code are writing it down; one of their great advantages is that they can be too simple to let us use a term to mean one thing in one place and another thing in another place, and pretend we aren't.
Fair point. I'm reminded of CharleyCarp telling about an argument he'd had, about whether the legal standard ought to be the words themselves, or if the words were a tool to describe an idea, and the legal standard ought to be the idea itself that is approximated by the words. (I'm probably poorly using words to approximate the story as he told it.)
We're such messy imprecise slobs.
27: here is the modified Likert scale for heebie:
https://bsky.app/profile/benmonreal.bsky.social/post/3l4tf6ypmpl2b