Yeah, I'm not sure why a Congress weak on antitrust would be any speedier to pass this.
Clever policy workarounds often aren't. Sometimes it's just a long slog.
All three parts of this strategy are critical to its success. Progressivity is the automatic way to offset the increasing returns on investments in code that give bigger firms an advantage over their smaller rivals. Revenue is a better base than income for any new tax. Income is the difference between revenue and cost, which can be incurred in different places, so there is a fundamental ambiguity about where corporate income is earned. This opens up the possibility of avoiding any tax on income by shifting income from high-tax to low-tax jurisdictions. There is no such ambiguity about sources of revenue. Taxing only the revenue from digital advertising is the best way to encourage firms to switch to less dangerous ways of being compensated for the services they provide.
In a very broad sense, this is what the various digital services taxes already being proposed by the UK/EU/OECD do. They are based on revenue, not profit, and unlike most taxes they don't require a physical establishment in the jurisdiction, which mitigates revenue shifting (e.g. Facebook claiming ads in the UK are "sold" in Ireland). They're generally flat rates rather than progressive, although I think they all have revenue thresholds below which nothing is payable.
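To make the flat-vs-progressive distinction concrete, here's a minimal sketch of how the two bases compare: the existing DSTs apply one flat rate above an allowance, while the proposal here would apply marginal rates that climb with ad revenue. The brackets, rates, and thresholds below are invented purely for illustration, not taken from any actual DST or proposal.

```python
# A minimal sketch of a flat digital services tax with an allowance versus a
# progressive tax on digital ad revenue. All numbers are made up for
# illustration; they are not the UK/EU/OECD DST rates or anyone's proposal.

BRACKETS = [
    # (upper bound of bracket, marginal rate)
    (500_000_000, 0.00),      # first $500M of ad revenue: untaxed
    (5_000_000_000, 0.03),    # next $4.5B: 3%
    (50_000_000_000, 0.10),   # next $45B: 10%
    (float("inf"), 0.25),     # everything above $50B: 25%
]

def progressive_ad_tax(ad_revenue: float) -> float:
    """Apply each marginal rate only to the slice of revenue inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if ad_revenue <= lower:
            break
        tax += (min(ad_revenue, upper) - lower) * rate
        lower = upper
    return tax

def flat_dst(ad_revenue: float, rate: float = 0.02, allowance: float = 500_000_000) -> float:
    """A flat-rate DST that taxes only ad revenue above an allowance (roughly
    the shape of the existing proposals, with made-up numbers)."""
    return max(0.0, ad_revenue - allowance) * rate

if __name__ == "__main__":
    for revenue in (400_000_000, 2_000_000_000, 100_000_000_000):
        print(f"ad revenue {revenue:>15,}: "
              f"flat {flat_dst(revenue):>17,.0f}  "
              f"progressive {progressive_ad_tax(revenue):>17,.0f}")
```

The point of the marginal structure is that the biggest ad businesses end up paying a higher average rate without creating a cliff at any single threshold.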
I don't think it's just a policy workaround. The advantages compared to content regulation are:
1) There's good reason to think that the business model of "gigantic ad-supported internet company" produces negative externalities. So it's a Pigovian tax of sorts, designed to push companies towards other models*.
2) There's good reason to think that size makes the negative externalities worse. This is the question the proposal raises for me -- how much advantage would be gained by breaking google into a few smaller (but still gigantic) companies?
3) Regulating content is not generally a good idea.
4) Anti-trust is not only politically complicated, it's an intellectually contested terrain right now. I hope that better frameworks for anti-trust enforcement emerge, but I think that will take a while.
I find (1) and (3) fairly persuasive; I'm not sure about (2) and (4).
* See, separately, the argument quoted here
I find it bizarre that the world has decided that consumer internet is the highest form of technology. It's not obvious to me that apps like WeChat, Facebook, or Snap are doing the most important work pushing forward our technologically-accelerating civilization. To me, it's entirely plausible that Facebook and Tencent might be net-negative for technological developments. The apps they develop offer fun, productivity-dragging distractions; and the companies pull smart kids from R&D-intensive fields like materials science or semiconductor manufacturing, into ad optimization and game development.
Game development is surely one of the most R&D intensive things there is.
I think Nick's #3 is the big reason-- we don't want governments to regulate speech, particularly asking government members to regulate the speech that got them elected. It's less fraught to convince them that digital ads should be a revenue source with special tools to prevent tax shifting.
Intel spends about a quarter of its revenue on R&D. Ubisoft spends over a third.
Did y'all see that Google (technically their subsidiary, DeepMind) put the entire human proteome online, and open-sourced the code they used to predict the structures? https://www.sciencedaily.com/releases/2021/07/210723095647.htm
How about Facebook's multilingual translation model that goes directly from any of 100 languages to the other without going through English? https://www.zdnet.com/article/facebook-ai-open-sources-multilingual-machine-translation-model/
Guess where the revenue to fund this stuff comes from. And btw, the hardware improvements in matrix-multiplication speed that kick-started the deep learning that these projects rely on, after everyone wrote deep learning off for dead in the 90s? Came from the GPU which was developed as a way to render graphics faster for video games.
7 is another good reason why a tax would be preferable to anti-trust regulation. It's quite possible that it's better for the world to have google, with a modified revenue model, than to break up google.
The only problem with ad-supported models is that the ads aren't targeted enough. They are usually trying to sell me shit I don't want to buy. I laud the ad-targeting teams at these companies for trying to understand me well enough to figure out what would be useful for me to see. At least their hit-rate is higher than dumb TV and magazine ads.
While I support the general point advanced by 4 et seq, I don't think that the massive consolidation of the consumer internet was or is particularly necessary to the general advance of technology. AFAIK the gaming industry isn't remotely as consolidated as the consumer internet; chip design certainly isn't.
I grant that Google in particular has generated tremendous public goods, but am not at all convinced that equivalent technologies couldn't or wouldn't have been developed otherwise, albeit less quickly in many cases;* and I stand to be corrected but believe that Google is very much an outlier in its beneficial spin-offs.
*The exception here is the concentration by these companies of training data and the consequent acceleration of related ML; but the data are both concentrated and hoarded, and, even with the best will in the world, so small a group of actors cannot make on behalf of all (non-PRC) civilization the optimum use of those data. This alone is an excellent reason to break up at least some of these companies and to make at least some of their data more openly available (though how best to do that I concede is a brutal problem).
we don't want governments to regulate speech
Is an extremely US-centric view. The first amendment regime is extremely permissive even by the standards of consolidated democracies, much less the world, and the utter obliviousness of US tech leaders (especially at Facebook) to this fact has had severe deleterious effects worldwide. (In addition to dealing a possibly fatal blow to your own republic. I'm dismayed that any informed American can still in 2021 condemn censorship without any qualification.)
I'm not at all confident the thing censored wouldn't be "Biden won by a comfortable margin."
I'm close to 100% confident that most of the videos of police shooting or beating unarmed civilians would be among the first things censored.
Fortunately, breaking up tech firms that get too big is not regulating speech!
Further to 11, I think that Facebook and Youtube in particular have to be broken up, because of their demonstrated institutional indifference to content moderation in particular and the public good in general. I also think privatized content moderation in general won't work unless the moderation is spun out into separate companies whose sole business is moderation-as-a-service, and which platforms of any size are required by law to hire, comparable to the way public companies are required to hire auditors.* For the platforms moderation will always be a cost center and they will always do the least they can possibly get away with. MaaS could also have spin-off benefits like reducing the barriers to entry for new platforms, or for moderators to specialize in particular countries or other niches.
It would also separate counter-disinformation efforts from the platforms. Whereas platforms (FB in particular) have a conflict of interest between maximizing user numbers and countering disinformation,** the MaaS provider doesn't. Further, given on the one hand Valley demographics, and on the other that a hypothetical MaaS sector would initially be built from the moderation departments of the Valley incumbents, it's also likely that the first wave of MaaS providers would lean in favor of the reality-based community. Of course the market alone isn't going to solve that problem, but incumbent advantage might work in favor of civilization for a change.
*I suspect the Commission would love this analogy enough that it alone might give the MaaS model staying power were it ever implemented. The auditor model would also argue for the breakup of the platforms, so as to reduce MaaS dependence on their clients.
**Especially but not solely in the US.
12 and 13 are true, but I stand by 11 last nonetheless. On the corpses of the hundreds of thousands, if not millions, of your fellow citizens who will ultimately be killed by covid denial alone (not to mention all the non-citizens).
More constructively, 12 and 13 don't show you should keep the first amendment; they show how far you need to go before you can start amending it. The de facto privatized censorship I outlined in 15 might take you some of the way.
I'm at least a couple of months past giving a shit about the health of voluntarily unvaccinated adults who get covid. I know they are hurting lots of others, so it still needs to be countered. But really, if someone is going to believe Tucker, I don't even know how to start to help them and if they get sick, the example will help others better than correct info on Facebook.
17: I sympathize. Since I'm banned already: It's like Deepwater Horizon. You can't get the oil back in the well, but you have to try to cap it.
I don't know how to cap it in any way that would be more effective than living well as people who ignore vaccines get sick.
Anyway, I'm on team tax the rich and break the monopolies. If that's supposed to be separate teams, I didn't understand the assignment.
The first amendment regime is extremely permissive even by the standards of consolidated democracies ... I'm dismayed that any informed American can still in 2021 condemn censorship without any qualification.
And let us remember that the First Amendment isn't even in play here. There has been no serious proposal* that runs afoul of Facebook's First Amendment freedoms, which, as you note, are quite expansive.
I was once a First Amendment extremist and an Internet triumphalist. I've gotten over it.
*That is to say, no proposal made by a Democrat.
Is there evidence that vaccine trutherism on FB stopped more people from getting vaccinated than accurate vaccine info on FB helped people get vaccinated? My guess would be that a lot more people are vaccinated than in a hypothetical world where FB shut down in January.
23 last: That's what FB says, and they say their reading stats support it. I believe them, but that's despite them saying it, not because of it.
23 though is the wrong question. Would there be more or less covid denial if FB had shut down in Jan 2020? I'd guess less, especially outside the US (inside, impossible to be certain, because Fox). More or fewer anti-vaxxers, pre-covid? Definitely fewer, FB algorithms promoted their groups, management refused to do anything pre-covid. More or less qanon? Less, same reason. More or fewer Trump administrations if Twitter had banned him in 2016 (or earlier, for birtherism)? Probably fewer (again, Fox). More or fewer pure Trumpists winning primaries? Definitely fewer.
I'm on Team Fox is Way More Pernicious Than FB, fwiw.
25: Are you still reading Kevin Drum? https://www.motherjones.com/politics/2021/07/american-anger-polarization-fox-news/
I also think privatized content moderation in general won't work unless the moderation is spun out into separate companies whose sole business is moderation-as-a-service, and which platforms of any size are required by law to hire, comparable to the way public companies are required to hire auditors.
This is intriguing, but it should be noted that auditing scandals are very far from rare. There would also, I assume, be a tension between the mandatory third party moderation (to the legal standard/code of practice) and the in-house moderation (to the site's TOS). I could see that playing out in a lot of difficult ways.
26: Finally got around to reading Drum's piece. He's good on the subject of Fox News in general, but it's really reductive to talk about these things in a monocausal way -- especially when Drum is quite clear that only something like one percent of the country is actually watching Fox in a given primetime evening.
The question is, how is Fox News amplified in the mainstream media and the social media -- and how is Fox itself influenced by politicians and other players?
Drum gestures towards those issues, but seems to view those other actors as lacking agency. Has the Trump movement influenced Fox, or has Fox influenced Trump? Both. Have other media voluntarily amplified Fox falsehoods and ignored -- or supported -- its malfeasance? Of course.
Drum gestures towards those issues, but seems to view those other actors as lacking agency.
Lots of people seem to treat Republican-shitheadness as a given trait. It's got a lot of explanatory power as a hypothesis.
Not just the Republicans, but social media and the mainstream media, too. In a sensible media environment, the behavior of, say, Tucker Carlson would be an ongoing scandal.
I'm more sympathetic to Twitter and Facebook. They were thrust into a new situation and are still figuring out the rules. But they still fucked up badly and bear a lot of responsibility.
25, 28.2: Fox News is amplified by people linking to it on FB, Twitter, etc. Google amplifies partly through how prominently it surfaces posts, and "moderates" based on Google's politics. The outcome is the same, though. What is shown on Fox gets talked about because it is click-baity. (You can follow the same path with CNN, which is the most Fox-like news source.) Like all other news channels, they have sources who provide unbaked click-bait; Fox bakes it and their followers eat it. (Sometimes the legacy entities such as newspapers, magazines and even blogs do some of the provisioning and/or baking.) It's no different for NPR or MSNBC, other than their audiences being much smaller. (How often does an NPR story "go viral"?)
In summary I don't see Fox News amplification as being any different from other news sources' amplification.
FWIW, I'm still a 1st Amendment extremist. FB et al. aren't violating the 1st because it doesn't apply to them, although the talk about taking "advice" from the government on misinformation is getting close to a violation (on the government side). There are some pretty clear court precedents on that.
TBC, when I talk about first amendment stuff above, I'm referring to the rights not of the platforms but of the individuals using them (although regulating the individuals would implicate the rights of the platforms inasmuch as platforms are press-like rather than common-carrier-like). I didn't mean to imply that the platforms themselves violate or could violate anyone's rights (as indeed they aren't the state).
taking "advice" from the government on misinformation is getting close to a violation
Yes. The inadequacy of the first amendment could hardly be more obviously demonstrated.
27: Comity. I just think the auditor model would be less of a clusterfuck than the status quo.