Maybe the CDC and WHO should take a few cues from “The Plandemic”

“The feed abhors a vacuum.” The CDC, the WHO, and other public health agencies don’t understand how social media users find information or how they want to consume it, the Stanford Internet Observatory’s Renée DiResta writes in The Atlantic. That vacuum gives misinformation room to move in and rise to the top of feeds. This has proven particularly problematic in the case of COVID-19 — a virus that, as The Atlantic’s Ed Yong has written, is extremely confusing. (Here’s one effort to make it less so.) DiResta:

In the case of the coronavirus, the worst-case predictions of some of those early prescient voices on social media were borne out, even as some city leaders were telling the public — in March — to keep going out to theaters and the CDC was still insisting that only a very narrow group of Americans needed to be tested. Frontline doctors and scientists emerged in droves as the pandemic spread, posting on Twitter, Medium, and Reddit to tell the public about the number and severity of cases they were seeing in their hospitals; their stories further contradicted earlier reassurance from institutions that the flu posed a more serious risk to the United States.

And so a meta-debate began: Why were social-media companies elevating the WHO and the CDC when some of their information turned out to be incorrect? And if agencies like these were wrong about COVID-19, what else were the so-called experts wrong about?

Populist Twitter decries any misstep by authority as confirmation of wholesale ineptitude or corruption — as if a mistake anywhere casts doubt on expertise everywhere. But these institutions did make a costly error with long-term ramifications for public trust: Rather than communicating transparently, frequently, and directly, explaining the distribution of probabilities and potential outcomes informing their guidance, they were reticent.

Institutional medical authorities are bound by an ethical obligation to speak precisely and to hew to the facts — a constraint not shared by the Twitter and Medium commentariat. But when they finally do achieve a sufficient standard of confidence to make a statement, the pronouncement is often something that some faction on the internet has been insisting is true for weeks, so the authorities appear to be leading from behind. The CDC, which during the pandemic has largely operated in the background, is not structurally suited for the communication environment in which it must operate.

Stepping in to fill said vacuum… BuzzFeed’s Jane Lytvynenko writes about “The Plandemic,” “a video made to look like a professional news interview but in reality peddling long-debunked falsehoods about the coronavirus [that] has spread widely on social media since May 4.” In the professionally produced video, the discredited scientist Judy Mikovits “falsely claims that masks can make wearers sick, that sand from the beach can build up coronavirus immunity, and that as-yet-uninvented vaccines for the virus that has killed at least 75,000 people in the US are dangerous.” Facebook, YouTube, and Vimeo all said Thursday that they were taking the video down, though it’s been popping back up as users re-upload it.

Brandy Zadrozny and Ben Collins write for NBC News:

A hashtag related to the title of the viral video, #Plandemic, trended on Twitter on Wednesday evening, with more than 30,000 overnight tweets. A Twitter spokesperson said that the Mikovits videos had not violated the company’s COVID-19 misinformation policy. Twitter did issue an “unsafe” warning on at least one link featuring Mikovits, and blocked the hashtags #PlagueOfCorruption and #Plandemicmovie from trends and search.

The amplification of Mikovits’ account appears to be authentic, according to Twitter.

“Judy Mikovits” was Google’s top trending coronavirus search Wednesday and Thursday.

YouTube had removed several versions of the videos by Thursday morning for violating its community guidelines…

Before the removals, the most-watched videos had already racked up more than 6 million views, according to BuzzSumo, a social media analytics tool.

The video has since been mirrored and reposted to dozens of other channels and across many platforms. On Facebook, one video was shared tens of thousands of times and attracted more than 6 million interactions in less than 24 hours, according to CrowdTangle, a Facebook-owned social media analytics company. Because CrowdTangle does not include data from private groups, the numbers are undoubtedly much higher.

“A Band-Aid on a massive wound?” Facebook is launching an oversight board this year. It “will review Facebook’s decisions about what content to take down or leave up,” and “will focus on the most challenging content issues for Facebook, including in areas such as hate speech, harassment, and protecting people’s safety and privacy. It will make final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram (which Facebook owns).” The first 20 members (out of a planned 40) were announced Wednesday; one you’ll recognize is former Guardian editor Alan Rusbridger.

Here’s David Kaye, the UN special rapporteur on freedom of opinion and expression, at Just Security:

[This] does not end the debates over online speech. For one thing, the board will not have jurisdiction over the legal demands imposed by governments. When, for example, Singapore or Turkey or Egypt or France (or any other government) demands a takedown of content under its domestic law, that appears to be beyond the scope of the board.

What happens when a government complains about a decision of the board? Who wins that fight?

Further, difficult content problems often take place at local levels, in languages and code that may be impenetrable to those outside. Will the board ever have the bandwidth to address the massive impact Facebook will continue to have in communities worldwide? Will the board, in other words, be more like a Band-Aid on a massive wound than an appellate body to solve the crises of online speech?

Julia Carrie Wong writing for The Guardian:

The board’s makeup tends heavily toward legal experts and leaders, and Claire Wardle, the director of First Draft and an expert in misinformation, said she hopes they will bring on more ‘practitioners’ in the future. ‘These are not people who have gotten down and dirty in closed Facebook groups with conspiracy theorists,’ she said.

Still, Wardle said she was cautiously optimistic about the concept of the board and pleased with the caliber of its initial members, whose reputational heft could shift the power imbalance that Facebook’s critics have often faced.

‘These people have too much reputational capital to lose to just do Facebook’s bidding,’ she said.
@rmack here: https://t.co/HQQH6Wqtfq
@daphnehk here: https://t.co/QcnUta61nr
@chinmayiarun and @AgustinaDelCamp here https://t.co/3tVjnahJZw
@sheeraf here https://t.co/i2Ytw32aQD
@alexstamos here https://t.co/XSvEFVn0YT

Aaand… @davidakaye here https://t.co/OWtiX1j64S

2/2

— Rasmus Kleis Nielsen (@rasmus_kleis) May 7, 2020


Hello! If you find yourself suddenly interested in the Facebook Oversight Board, because now it seems Real and Serious, here's some stuff I prepared earlier!

This is my most in-depth look: https://t.co/o18vs2K2V9 pic.twitter.com/E0T7Fz6AvG

— evelyn douek (@evelyndouek) May 6, 2020

The board members will be very much part-time (15 hours a month), and their pay hasn’t been disclosed.

Just asked a follow-up on how part-time this oversight board is going to be.

Members are committing to an avg of 15 hours a month.

20-40 (smart, qualified) people, putting in 15 hours a month, checking power of a global social media behemoth with 3 billion users.

good luck!

— Sarah Frier (@sarahfrier) May 6, 2020

Issie Lapowsky at Protocol writes that this model could go beyond Facebook:

It’s a bold idea for Facebook. But the board isn’t just for Facebook. In designing this new organization, Facebook’s leaders deliberately structured it so that it could have a life beyond the company. To do that, they formed a separate legal trust with an initial $130 million investment from Facebook.

But they also empowered that trust to both accept funding from sources outside Facebook and to form companies of its own. That structure would ensure Facebook CEO Mark Zuckerberg couldn’t just shut down the board if he didn’t like its decisions. But it also opens up the possibility that the trust might some day spin off additional oversight boards for, say, YouTube, Twitter or any other platform that makes content moderation decisions.

Tracking traffic back to WhatsApp. It’s tricky to do, but here’s how The Guardian did it — a thread:

1. There’s lots of talk about how WhatsApp can spread misinformation but few examples with data. So. A few days ago @guardian started to see some unusual referral to this piece from @heldavidson. To be clear, this is a piece of accurate, quality reporting https://t.co/PweJgok5Gd

— Chris Moran (@chrismoranuk) May 6, 2020
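
Why it’s tricky, in a nutshell: links opened from WhatsApp usually arrive with no referrer header, so on the publisher’s side they land in the “dark social” bucket alongside email, SMS, and copy-pasted links. Here’s a minimal, hypothetical Python sketch of the kind of heuristic analysts fall back on (the function, the parameter names, and the utm_source tagging convention are illustrative assumptions, not The Guardian’s actual methodology):

```python
from urllib.parse import urlparse, parse_qs

def classify_pageview(referrer: str, landing_url: str) -> str:
    """Best-effort guess at where a single pageview came from."""
    if referrer:
        host = urlparse(referrer).netloc.lower()
        if "whatsapp" in host:
            # Rare: some in-app browsers do pass a referrer through.
            return "whatsapp"
        return host or "unknown"
    # No referrer header at all. Check for campaign tags that a
    # publisher's own share buttons may have appended to the URL.
    query = parse_qs(urlparse(landing_url).query)
    if query.get("utm_source", [""])[0].lower() == "whatsapp":
        return "whatsapp"
    # Otherwise it's "dark social": WhatsApp, SMS, email, and
    # pasted links all look identical from the server side.
    return "dark-social"

# A burst of referrer-less hits on a years-old article is the kind of
# anomaly that suggests messaging-app distribution.
print(classify_pageview("", "https://example.com/story?utm_source=whatsapp"))  # whatsapp
print(classify_pageview("", "https://example.com/story"))                      # dark-social
```

Even with tagged share links, this only catches readers who used the share button; everyone who pasted the URL into a chat by hand stays invisible, which is why spotting WhatsApp-driven traffic usually means inferring it from anomalies rather than measuring it directly.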

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.