Is it time for the US to regulate social-media privacy, along the lines the EU has been discussing? Is it time, in other words, for the government to get involved in what Facebook can and cannot do?
Facebook clearly needs to be able to run ads for users in order to pay the bills. Should users be allowed to opt out of personal data collection entirely, leading to generic ads? That might significantly undermine Facebook's advertising revenue. At a broader level, though, maybe there should be limits, or at least notification. For example, it is relatively obvious to users that if they "like" something, Facebook may use that as evidence of advertising preferences. But what if someone visits a website unrelated to Facebook? Facebook can tell the user was there, due to third-party cookies on that website, but that is not at all obvious to users.
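The third-party-cookie mechanism can be sketched in a few lines of Python. This is a hypothetical simulation, not Facebook's actual code, and the names and cookie value are invented: when a page embeds a widget (a Like button, say) served from facebook.com, the browser attaches its facebook.com cookie to that request and reports the embedding page, so the third party can link visits to different sites to one user.

```python
# Hypothetical sketch of third-party-cookie tracking; all names are invented.

class Browser:
    def __init__(self):
        self.cookies = {}     # cookies, keyed by the domain that set them

    def visit(self, page_url, third_party):
        # Loading the page triggers a request to the embedded third party;
        # the browser attaches that third party's cookie, and the request
        # reveals which page the user was reading (as the Referer header would).
        cookie = self.cookies.setdefault(third_party, "user-12345")
        third_party_log.append((cookie, page_url))

third_party_log = []          # what the third party ("facebook.com") learns

b = Browser()
b.visit("https://news.example.com/story", "facebook.com")
b.visit("https://health.example.org/diagnosis", "facebook.com")

# The same cookie value ties both visits to one user:
print(third_party_log)
# → [('user-12345', 'https://news.example.com/story'),
#    ('user-12345', 'https://health.example.org/diagnosis')]
```

Neither site here has any relationship to the other; the correlation happens entirely on the third party's side, which is why it is invisible to the user.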
At a more moderate level, there is widespread sentiment in Europe that Facebook should be regulated to prevent the recurrence of something like the Cambridge Analytica scandal. In that event, Facebook (a) allowed users to agree to the export of data about their Friends, and (b) failed to audit the data that was downloaded. And there is also (c): users weren't fully informed of the potential consequences of participating in the original quiz, though to be fair it's likely that nobody knew those consequences at the time.
The EU has already implemented its General Data Protection Regulation (GDPR), requiring opt-in consent for data collection. So far this consent is quite generic, but the approach does have the potential at least to let people know when Facebook is privy to information about their general browsing.
Another way to look at all this is that Facebook is quite transparent about how they sell ads. They are, however, much less transparent about how they monitor what users do off the site, and about how they use users' email addresses and phone numbers as client identifiers. Should the government step in to regulate the level of transparency across the board, even if the actual advertising techniques don't change?
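The email-and-phone matching works roughly as follows. This is a simplified, illustrative sketch: advertiser-side ad matching (Facebook's "Custom Audiences") is generally described as uploading SHA-256 hashes of normalized identifiers, but the function name and details here are this note's own.

```python
import hashlib

def normalize_and_hash(identifier: str) -> str:
    """Lowercase and trim the identifier, then hash it with SHA-256."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

# An advertiser hashes its customer list; Facebook hashes its own user
# records the same way.  Matching hashes identify the same person, so the
# email address effectively serves as a cross-database client identifier.
advertiser_hash = normalize_and_hash("  Alice@Example.com ")
facebook_hash   = normalize_and_hash("alice@example.com")
print(advertiser_hash == facebook_hash)   # → True
```

The hashing means the raw email need not change hands, but from the user's perspective the effect is the same: activity tied to that address on one site can be linked to the same person on Facebook.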
As an example more directly tied to the use of facebook.com itself, right now if you keep checking a Friend's page, that Friend has no way of knowing: your privacy in "lurking" is safe. But Facebook could easily change that, even retroactively. Should they be able to do that without any regulation or restraint?
There are a few classic drawbacks to regulation. One is that regulation is inevitably politicized; regulation during an administration that disapproves of Facebook is likely to be more onerous. Another is that innovation at Facebook might be inhibited, potentially leading to a lack of future advances in advertising that might very well be less intrusive or annoying. Eventually, Facebook might be unable to respond to new areas of competition (the next Snapchat, for example), and would lose market dominance. Finally, there is regulatory capture, where the regulators end up feeling more beholden to the industry than to consumers. The FAA and the FCC both exhibit this.
We've seen that §230 of the Communications Decency Act has made it nearly impossible for outsiders and third parties to force the takedown of user-posted content. It also protects sites choosing to take down selected content, which, when the topic is political, is often viewed as censorship.
Should the voluntary takedown power be limited? This is the part originally addressed in §230(c)(2):
No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

The goal of §230 was to protect sites that did family-friendly editing. There were two parts; the first, and best-known, is that sites aren't liable for hosting user-posted content. But there is also the second (the passage quoted above): sites cannot be found liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased.
The US Department of Justice has proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" provision would be narrowed to cover only moderation decisions consistent with the site's terms of service. (The DoJ proposals are here.)
You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.
An example of this issue is the story published by the New York Post on October 14, 2020 about Hunter Biden's emails allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter, and downranked in newsfeeds on Facebook. Stated reasons were that the story was based on hacked personal information, and its sources were not well authenticated.
Here is an interesting article addressing what Twitter's actions said about today's relationship between tech and journalism: palladiummag.com/2020/10/19/the-centralized-internet-is-inevitable, which declares that, with Twitter's blockade, "in an instant, the authority of Western newspapers was forever reduced". However, the article does not acknowledge the idea that, just as journalists have a right not to cover stories that are poorly sourced and based on hacked information, maybe Twitter does too. As of this writing, it does appear that the emails are mostly genuine.