Facile unto self-parody. AIHMHRISC, Tooze; podcast on interesting-sounding book.
Is the proposition that low-overhead implementations of seeing like a state are distorting, making quantitative thinking too easy and too apparently authoritative?
Interesting idea; IMO it points out a pervasive but low-level problem, especially if Noah Smith is a model practitioner, which is maybe fair: he's relatively successful for what he is.
But IMO the core problem confronting OECD societies is a post-truth worldview, not a technocratic lack of nuance. Being able to say repeatedly and clearly that eg US life expectancy is falling, or that most new cars sold in China are electric, or just to look up whether absolute per-country CO2 emissions are rising or not, is IMO absolutely a net benefit.
Both individually, as a counterweight to national media (Time suggested the Kakhovka dam explosion might be Ukraine's Chernobyl), and as a way to enable analysts (yes, even ones like NS) to reason from solid, clearly sourced foundations, it's a huge net gain.
Also, maybe this is harsh, but finding analysts who are not numerate and who are worth reading for any large-scale social issue is not easy. Who's worthwhile that's being talked over?
Facile unto self-parody.
I'm glad to see that response, because (I hope) that means we have enough disagreement to have an interesting discussion.
The obvious question is, "whet information do you look at to compare COVID outcomes among various countries?" Or life expectancy, or renewable energy deployment, or, heck, Access to clean water*
My default starting point is that (a) trying to manage progress based _only_ on abstract data summaries is likely to go astray, but (b) collecting consistent data and making it available is difficult, expensive, and very, very valuable. I haven't listened to the linked podcast, but I have a strong default belief that it's absolutely important to have access to the data. That said, Dan Bouk has put together what appears to be an excellent resource which catalogs key caveats (all of which I'd agree with). It offers the following sections:
1) Modern Societies Are Built to Trust In Official Numbers. They Even Let Official Numbers Make Key Decisions.
2) Official Numbers are Made, Not Found
3) When Things Are Going Well, We Forget That Official Numbers Had To Be Made
4) Institutions Make Public Data and They Make Data Public
5) Official Numbers Are Political
6) Consensus on Official Numbers Requires Work (It isn't certain that the givens will be taken.)
* About which they say, "Are we making progress? The world has made progress in the last five years. Unfortunately, this has been very slow. In 2015 (at the start of the SDGs) only 70% of the global population had safe drinking water. That means we've seen an increase of four percentage points over five years. "
2.1 is elegant and co-signed.
I know nothing of the OP site. Maybe it does useful work; I fully agree access to data is better than not having access.
The obvious question is, "whet information do you look at to compare COVID outcomes among various countries?"
The obvious answer, for too many political actors (from individual to whole-of-state) is, "I don't care."
Datasets are simply tools, and making them available to more actors means more actors will use them. Some of those actors will work toward "progressive" ends (however OP site is defining that); many will not. All will succeed or fail as political actors in political contexts.
No quantity or quality of data will stop bad actors from bad action, or stop good actors from fucking up, or stop people from believing what they want to believe.
Yeah. Covid really drove that last point home.
I agree with teo's criticism, but I didn't read it as "I am worried about using statistical measures devoid of context". I don't think Noah Smith is using his data wrong - his description of the state of Ghana, and indeed Africa, seems accurate.
Where he needs the context (I think teo is saying) is in answering the very important questions "how did Ghana get like this" and "what should Ghana do next". Those are not questions that Our World in Data can answer.
No quantity or quality of data will stop bad actors from bad action, or stop good actors from fucking up, or stop people from believing what they want to believe.
That seems to me to be a completely unrealistic counsel of despair. Really? Good actors will fuck up at exactly the same rate, regardless of the quality of their data?
Noah Smith's also wrong about this: In Europe, it was British industrialization that provided the model for continental Europe (though they made their own tweaks).
No, absolutely not, wrong. Read Robert Allen. British industrialisation is the least useful model for any other state that wants to industrialise and every industrialised state other than Britain has got there by a very different route. Britain industrialised in a non-industrialised world. Every other industrialising country had to wrestle with a problem that Britain didn't have, i.e. the existence of an already-industrialised Britain!
Malka Older's Infomocracy had an interesting future society with a global governmental superstructure ensuring abundant accurate data and making sure it is injected into the discourse, appended to speeches, etc., as much as possible. Of course that's not enough to get to utopia; I haven't read the whole trilogy, but it was obvious politics had moved to a much more functional baseline.
6.2: Point taken. The rate of fuckups should decrease. My position remains: political problems, and their solutions, are political; data are tools in political processes. I don't counsel, and at a global level never have counseled, despair. I write harshly in this thread because OP excerpt reeks of precisely the blithe ignorance you (and I, and others) pointed out in Smith.
4/8: just to clarify -- I was trying to ask two questions in the OP: both "how valuable is this data?" and "how convincing is this explanation?" It sounds like you are answering the second, in the negative, but offering no opinion on the first. That means the actual degree of disagreement is difficult to assess.
Longer comment, now that I'm back on my computer:
I agree with teo's criticism, but I didn't read it as "I am worried about using statistical measures devoid of context". [rather context is necessary for] answering the very important questions "how did Ghana get like this" and "what should Ghana do next". Those are not questions that Our World in Data can answer.
Part of why I wrote this, rather than continuing the conversation about Noah Smith, is that I thought it might be a way to generalize to some broader questions. But I do think there are some useful parallels to be drawn. In my opinion the biggest argument in favor of Our World In Data, and Noah Smith's better work, is that (a) they are built on a framework of concrete, publicly available data*, (b) the data is more accurate than "takes," which mostly analyze the world in terms of preexisting ideological beliefs, and (c) they may be a starting point that prompts additional research, not necessarily the final analysis**. The argument that they (Noah Smith or Our World In Data) are less valuable than they appear (or even harmful) is some form of: (a) they offer a cheap facsimile of expertise*** which (b) does not encourage further inquiry because (c) at first glance it appears complete.
I am inclined to the positive assessment, but I think the caveats are important enough that I thought it could be useful to have a thread to discuss them.
* which I think is valuable even with the caveats mentioned in 3.
** I will also defend the fact that people who read one broad summary of an issue and stop there are genuinely better informed than people who haven't read anything about it; they should just have some humility about the extent of their knowledge.
*** I think this is part of what 2.1 suggests
TL;DR: OWID appears to be providing a service of positive but extremely marginal value; their work cannot be remotely as valuable as they appear to believe it is. (I too cosign 10. This is not inconsistent with 1.) How valuable are the data? As valuable as the underlying research. More important, how valuable is their synthesis and presentation of the data? I think this collapses into the question Nick posed as "How convincing are their explanations?" Not even slightly. Manifesto*:
The news media is neither drawing our attention to the large problems we face, nor to the fact that we are making progress against some of them. The news media focuses on daily events, but neither the big persistent problems (such as those listed above) nor the progress against them find a place in news cycles. Our education systems are also not making us wonder how we can make progress, we are hardly even learning about the progress we made.[**]
To the extent this is true, it is true despite the availability of optimistic data. Peddlers of gloom peddle it because it sells. Republishing already existing good news will not change the incentive structures that caused the good news to be buried in the first place. If OWID can act as a wire service providing prepackaged good news for other publications, that hopefully would make a marginal difference, especially perhaps in media markets like India where print hasn't been wiped out***; though for truly influential, i.e. broadcast, media with limited screen time I doubt it would get anywhere. Injecting optimism into school curricula would be worth more, but those again are questions of politics. OWID may prove handy for someone somewhere in one of those decisions, but again, the data already exist, and the actors will pursue their own ends, not OWID's.
We believe that a key reason why we fail to achieve the progress we are capable of is that we do not make enough use of this existing research and data: the important knowledge is often stored in inaccessible databases, locked away behind paywalls [research access] and buried under jargon in academic papers [research popularization].
Access: journal pricing is indeed a problem, but I don't believe for a second it's a "key" problem obstructing global progress. Macro, progress depends on policy decisions by states, which do have access to the relevant data (or can at least use scihub like everyone else); micro, it depends on private actors (companies, bureaucrats, individuals) being able to get shit done. Will data/research access help them do that? Many times, many places, yes. Will OWID be making available the kind of data that people will actually find useful, day-to-day? Based on their front page, no: they're collecting very high-level macro information.
Popularization: a different version of the media case: the data already exist, and popularizing publications already exist; maybe OWID can be useful at the margins.
*Turgid and tedious; I won't bother finishing it.
**I question whether this last is true of the basic education curricula serving a solid, if not vast, majority of people in the world.
***Though I doubt Indians lack for optimism; AFAIUI optimism is central to Modi's shtick, and they lap it up.
I find it difficult to discuss the value of good data in the context where the executive class seems to think of bad data as almost as good as good data, rather than often worse than no data at all. There's just a real imbalance between how happy executive-types are to spend money on analyzing data or punishing workers based on data and their willingness to spend money gathering *good* data. So I'm inclined to be instinctively anti-data in the workplace, even though I love data as an amateur or as a sports fan.
For example, there's a huge push to judge faculty at every stage based on course evaluations *at the very same time* that they moved course evaluations online to save money and so the response rate is so low as to make the data completely worthless (worthless even for the narrow goal of measuring student happiness, which is of course heavily biased by race and gender of teacher and not very correlated with any kind of learning).
Or every time you interact with some service person you get sent some long survey with like 40 questions rated from 1-5, but unless you put 5's on everything the person gets in trouble. That could just be one yes/no question! If you want me to rate things from 1-5 then 4's had better be good! And there's no excuse for asking more than two questions anyway!
I now refuse to fill out any survey at work or with kids' schools unless there's a comment section for me to complain about survey design. Outside of that, I refuse to fill out any survey.
If you have enough bad data then it becomes good data by virtue of being big data. My AI tells me so and have you seen the training it went through?
Has no one told them of the Literary Digest poll?
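If anyone wants to see why size doesn't rescue a biased sample, here's a toy Python sketch (every number is invented for illustration; these are not the actual 1936 figures, just the shape of the problem):

import random

random.seed(0)
TRUE_SUPPORT = 0.62  # invented "true" share supporting candidate A

def poll(n, frame_bias=0.0):
    # Each respondent comes from the sampling frame; frame_bias shifts
    # the kind of person who ends up being asked at all.
    hits = sum(random.random() < (TRUE_SUPPORT - frame_bias) for _ in range(n))
    return hits / n

print("2,000,000 responses from a skewed frame:", round(poll(2_000_000, frame_bias=0.20), 3))
print("1,000 responses from a random sample:   ", round(poll(1_000), 3))

The huge poll pins down the wrong answer to three decimal places; the small random one is noisy but lands near the truth. Which is roughly the Literary Digest story.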
Peddlers of gloom peddle it because it sells. Republishing already existing good news will not change the incentive structures that caused the good news to be buried in the first place. If OWID can act as a wire service providing prepackaged good news for other publications, that hopefully would make a marginal difference, especially perhaps in media markets like India where print hasn't been wiped out
I would encourage you to click through some of the links (either the one in the OP behind "bring together a wide range of data on a single topic" or the one on clean water access in comment 3), because you may be mistaking the explanation of what motivates them for a description of what they are actually doing. They aren't writing articles about good news; they are trying to make a wide variety of public data accessible and searchable, so that people can see, good news or bad, what's actually happening, with the hope that it will motivate people to think that interventions can have an impact.
It is still perfectly reasonable to question whether the data they use is of high enough quality to support the conclusions that it is used for, but it's worth looking at the site.
I just tried clicking through the website to the first topic I spotted that I knew a little bit about:
https://ourworldindata.org/eradication-of-diseases
The weird thing is how little data there is on that page, so I feel like I'm missing what's going on here. It's a solid white paper on the topic -- I'm not criticizing it -- but I don't understand why it's being framed in terms of "data."
23: it looks like that doesn't have much, but you can click the line that says, "Interactive charts on Eradication of Diseases" to see some charts.
22 me.
In virtue of being the weirdo who links to books in WorldCat I stumbled on this thing, which, before it wanders off to be Marxist on its own, nicely distills the issue:
Tooze is specifically interested in the actor-networks of bourgeois elite decision-making [...] focus on elite decision-making and his audience of young technocrats looking for policy solutions to the "polycrisis." [...] the critique of political economy focuses on why elite decision-makers, in spite of their legions of bean counters and exquisitely credentialed technocrats, have not only failed to deal with the systemic crises of the "Risk Society" but actually keep making them worse. [...] joins the chorus of Keynes and Krugman in their impotent cries against the failures of decision-makers to follow the obvious paths unearthed by science.*
*Misrepresents Tooze, who definitely thinks things have gotten better, e.g. calling (IIRC) automatic stabilizers (leftover Keynesian technocratic instruments) the unsung heroes of the Great Recession.
Instinctively suspicious of anyone who, in the 21st century, calls an article "On [Something]" because it's transparently an attempt to acquire stolen gravitas.
It's not like we were using the gravitas.
16: I've taken to responding to student evaluations in my annual report with discussions of statistical relevance and literature about the biased nature of the instrument. In general, the problem isn't so much data collection -- it's the problem of the measure becoming the target. We recently started collecting and measuring the DFWI rate (basically, the non-pass rate) by faculty member, and we're told that 30% DFWI is bad. Implication: this is the fault of the instructor. Now, there are actually very good reasons to collect this data -- the course may be bad! Students might be taking the course when they're not adequately prepared! The course might be popular with a demographic of students that is underprepared for other reasons! There are plenty of reasons that a university might want to know which courses are the ones that are difficult for students to pass.
But if you simply send out a spreadsheet to every chair telling them that they're tracking DFWI by faculty, the faculty (in one department) will respond by preemptively kicking out underperforming/no show students before the drop date. They won't show up in the data then! Problem solved, uh, right?
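To make the bookkeeping concrete, here's a toy sketch (all numbers invented, and the counting conventions are assumptions -- institutions handle pre-drop removals differently): the same class, taught the same way, produces very different headline rates depending on whether students removed before the drop date ever enter the denominator.

enrolled = 40
d_f_i = 8             # students ending with D, F, or Incomplete
withdrawals = 4       # students who withdraw and receive a W
pre_drop_removed = 6  # no-shows removed before the drop date

# Convention 1: pre-drop removals vanish from the roster entirely
rate_if_removals_vanish = (d_f_i + withdrawals) / (enrolled - pre_drop_removed)

# Convention 2: pre-drop removals still count toward the rate
rate_if_removals_count = (d_f_i + withdrawals + pre_drop_removed) / enrolled

print(f"DFWI rate if removals vanish: {rate_if_removals_vanish:.0%}")  # ~35%
print(f"DFWI rate if removals count:  {rate_if_removals_count:.0%}")   # 45%

Same students, same teaching; the spreadsheet just rewards whichever convention makes the number smaller.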
I agree with teo's criticism, but I didn't read it as "I am worried about using statistical measures devoid of context". I don't think Noah Smith is using his data wrong - his description of the state of Ghana, and indeed Africa, seems accurate.
Where he needs the context (I think teo is saying) is in answering the very important questions "how did Ghana get like this" and "what should Ghana do next". Those are not questions that Our World in Data can answer.
This is correct as a statement of my views. The sort of data collection and aggregation that OWID does is necessary but not sufficient for addressing the issues they want to address, and the same is true of the kind of data analysis that Noah Smith does.
As for OWID specifically, what they actually do seems fine. They do seem to put a lot of effort into ensuring that the data they aggregate is actually as rigorous and accurate as possible, which is not always the case for data aggregators, so that's good. If you're looking for the best information about some question that lends itself to data collection, they're an excellent resource to check.
Their rhetoric, though, is very reminiscent of the Effective Altruist movement, especially early on, and I suspect there's a lot of overlap in personnel. There's certainly a lot in worldview. Given where that movement ended up going I'm hesitant to trust the trajectory of OWID. If they stick with data aggregation they'll be fine but this sort of rhetoric has a tendency to encourage scope creep.
One thing that's become very apparent from my epidemic project is how often the necessary data to answer a given question just doesn't exist and never will. This is particularly the case as you go back into history, even quite recent history.
28 last: that's the D in DFWI, right? At least at my institution, kicking them out would contribute to, not avoid, a high DFWI-whatever.
I would agree with 29.
Their rhetoric, though, is very reminiscent of the Effective Altruist movement
I had that thought when looking up the excerpt for the OP. On the other hand, I also find much of the EA rhetoric appealing, so I may be too kind to good intentions.
31: I thought the 'D' was just the grade 'D'.
I was curious about the OWID background, and it looks like it was a single person project that became a team-based one housed under some umbrella in Oxford. They're supported by a non-profit that seems to have been created for that purpose, with assistance from Y-Combinator. So not necessarily bad, but probably why the mission statement sounds to me like a pitch.
31: It's just the grade 'D.' Kicking out underperforming students does improve the class averages, but I suspect that's not the intervention anyone is going for.
When I was in college, they had to take all graduates from Nebraska high schools. But they flunked out many. Hopefully, they didn't hold that against the instructors.
Here the drop date is the first week, so you can't somehow kick people out without them getting a W.
Of course we're still expected to have DFW rates comparable to those at other institutions where students can drop the class without getting a W! Our DF rates are not so different from peers but our W rate is much higher. (To be fair some of the deans are willing to understand that argument to some extent, but they're still being judged by DFW rate by their bosses, so there's only so far their reasonableness goes.)
Our real problem is that students need a B to get into certain desirable majors and so also will withdraw if they don't think they're getting a B, which is both outside our control and not comparable to peer institutions. But still DFW number rules all...
Without the 'I', it's too airport-y.
Apparently part of the airport actually is in Irving. I did not know that.
Have you ever seen drop-add from a DC-9 at night?
Southwest has been pissing me off lately.