I'm happy to use "Dunning-Kruger" as a shorthand for the phenomenon where novices are sometimes overconfident of their ability and experts acknowledge gaps in theirs, whether or not it's ubiquitous.
It feels like a more political topic than it was when I personally first heard about it way back in college. I have less interest in or patience for political discussions these days for obvious reasons, so if I'm wrong about this, I don't want to know it.
I always get this mixed up with the test that tells if you are a replicant.
I'm really tenuous and contingent when I'm working in my field of expertise. Drives coworkers nuts sometimes.
I'm also very confident you have a problem with your alternator despite the fact that I could probably not pick out an alternator if it wasn't in a car.
It's easy enough to find in a car because there's a belt from the engine to it.
You can't drive your coworkers nuts if you can't drive them anywhere.
You could if your computer worked.
But there's a hole in the alternator, dear Henry, dear Henry.
That's how the electricity drains to the battery.
One time I rebuilt an alternator because I got tired of constantly having to jump-start the car.
You can get open-source software that makes xkcd-style charts - here's the hook for Matplotlib, and presumably for everything that's downstream of it:
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xkcd.html
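A minimal sketch of how that hook is used; plt.xkcd() works as a context manager, and the curves and labels here are invented purely to show off the style:

    import matplotlib.pyplot as plt
    import numpy as np

    # The hand-drawn xkcd look applies only to figures created inside the block.
    with plt.xkcd():
        x = np.linspace(0, 1, 100)
        fig, ax = plt.subplots()
        ax.plot(x, x, label="actual ability")        # made-up curves,
        ax.plot(x, 1 - x / 2, label="confidence")    # just a style demo
        ax.set_xlabel("experience")
        ax.set_ylabel("level")
        ax.legend()
        plt.show()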
Can the software make me funny and knowledgeable about stuff like physics?
I so want the Dunning-Kruger effect to be real (to be recognized by the relevant experts/authorities as a genuine socio-psychological phenomenon, and not just the grumpy artefact of my own exasperated response to yet another tiresome encounter with some dumb*rse who is too stupid to realize just how deeply stupid he actually is....), that I am willing to over-confidently, and on the basis of no special ability or expertise whatsoever, dispute the claims of this article!
(Also: admittedly, I haven't yet read the article. Off to read the article...)
As I recall, it went something like this:
- generate N pairs of independent random samples (e.g., pair 1 = {x1, y1})
- bin the pairs by the first number in each pair (e.g., bin by x1)
- calculate the average of the second number for each bin (e.g., mean of the y values per bin)
And guess what!
In the low value bins the average of the second number will be higher than the binning value (because it will be the mean of the distribution the second value is drawn from). In the high value bins the average of the second value will be lower than the binning value (because, again, the mean).
In reality, there will be some small correlation between the numbers, so the stupidity of the analysis is not quite so evident.
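A quick simulation makes the artifact concrete. This is just a sketch of the procedure described above; the sample size, seed, and quartile binning are arbitrary choices of mine:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Two completely independent draws per "person": x plays the role of
    # measured skill, y the role of self-assessed skill.
    x = rng.normal(size=n)
    y = rng.normal(size=n)

    # Bin by x into quartiles and average y within each bin.
    edges = np.quantile(x, [0.25, 0.5, 0.75])
    bins = np.digitize(x, edges)
    for q in range(4):
        mask = bins == q
        print(f"quartile {q + 1}: mean x = {x[mask].mean():+.2f}, "
              f"mean y = {y[mask].mean():+.2f}")

    # Mean y is ~0 (the population mean) in every bin, so the bottom
    # quartile looks "overconfident" and the top quartile looks
    # "underconfident": regression to the mean from pure noise.

Plot mean x against mean y per quartile and you get the familiar converging lines of the classic Dunning-Kruger chart, from data with no psychology in it at all.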
16 was my understanding, too. A friend recently posted something about Dunning-Kruger and respecting science, and I couldn't resist replying with a link to that article and saying science suggests Dunning and Kruger were mistaken, no matter how much I'd like it to be true.
16 is how I learned it too. But I learned it in a class on regression, so that is the kind of thing they say.
I want the DK effect to hold up as the empirical evidence of "beware the man of one book".
Unless that book is the Silmarillion, because they'll never finish.
There's no existing or possible world where "beware the man of one book, where that book is the Silmarillion" is bad advice.