Computer Ethics Paper 2

Topic option 1: Google, Facebook and Privacy

Advertisers collect a lot of information about us as we browse the Internet. We've all seen those GDPR cookie-policy notices on websites. Is it time for the US to create its own regulations governing user privacy and advertising? In other words, is it time for the US government to get involved in what Facebook, Google, and others can and cannot do?

Both Google and Facebook clearly need to be able to run ads in order to pay the bills. With Google, ads are shown in search results and whenever you visit a site that hosts Google ads; on Facebook, ads are shown while you are on Facebook. Should users be allowed to opt out of personal data collection entirely, receiving only generic ads? That might significantly undermine both companies' advertising revenue. At a broader level, though, maybe there should be limits, or at least notification. For example, it is relatively obvious to users that if they "like" something, Facebook may use that as evidence of advertising preferences. But what if someone visits a website unrelated to Facebook? Facebook can often tell the user was there, due to third-party cookies on that website, but that is not at all obvious to users. Google, similarly, can track you nearly everywhere you go on the web, and apparently tries very hard to keep tabs on whether you actually bought something you saw an ad for.
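To make the third-party-cookie mechanism concrete, here is a minimal sketch of a tracking-pixel server. The tracker.example domain and all of the code are hypothetical; this is not any actual company's implementation. A site embeds an invisible one-pixel image served from the tracker's domain, and the browser then sends the tracker's cookie, along with the address of the embedding page (the Referer header), every time the pixel loads, so the tracker sees the same visitor ID on every unrelated site that carries its pixel:

    # Hypothetical third-party tracking endpoint (Python standard library only).
    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A valid 1x1 transparent GIF: the classic "tracking pixel" payload.
    PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff'
             b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00'
             b'\x01\x00\x00\x02\x02D\x01\x00;')

    class PixelHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            cookie = self.headers.get('Cookie', '')
            referer = self.headers.get('Referer', 'unknown')
            # Reuse the visitor's ID if the browser sent our cookie back;
            # otherwise mint a new one. The same ID now turns up on every
            # site that embeds this pixel.
            if 'uid=' in cookie:
                uid = cookie.split('uid=')[1].split(';')[0]
            else:
                uid = uuid.uuid4().hex
            print(f'visitor {uid} was just on {referer}')   # the browsing profile
            self.send_response(200)
            self.send_header('Content-Type', 'image/gif')
            self.send_header('Set-Cookie', f'uid={uid}; Max-Age=31536000')
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == '__main__':
        HTTPServer(('localhost', 8080), PixelHandler).serve_forever()

Real ad scripts and Like buttons are far more elaborate, but the principle (a third-party asset carrying a long-lived identifier across sites) is the same.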

At a more moderate level, there is widespread sentiment in Europe that Facebook in particular should be regulated to prevent a recurrence of something like the Cambridge Analytica scandal. In that episode, Facebook (a) allowed users to agree to the sharing of data about their Friends, (b) allowed the data to be exported to the app creator, and (c) failed to audit the data that was downloaded. And there is also (d): users weren't fully informed of the potential consequences of participating in the original quiz, though to be fair it's likely that nobody knew those consequences at the time. (But keep in mind that it is never in Facebook's interest to allow the export of large quantities of its data; the fact that Facebook allowed this suggests that it did not understand the ramifications.)

The EU has already implemented its General Data Protection Regulation (GDPR), requiring opt-in consent for data collection. So far the required consent is quite generic, but the approach does have the potential to at least let people know when Google, Facebook, or others are privy to information about their general browsing.

Should there be rules regarding the widespread tracking of user Internet visits? If so, how would the notification work? Like GDPR cookie notices? Should there be rules on the use of "fingerprinting" technology, which identifies website visitors by their browser's characteristics rather than via cookies? Should there be a Do Not Track option that is legally binding?
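As a rough illustration of why fingerprinting sidesteps cookie-based rules, here is a simplified sketch; real fingerprinting also harvests client-side signals such as canvas rendering, installed fonts, and screen size, but even the headers a browser volunteers on every request go a long way. Note that no cookie is stored, so clearing cookies changes nothing, and the advisory Do Not Track header can simply be ignored:

    # Simplified server-side fingerprinting sketch (illustrative only).
    import hashlib

    def fingerprint(headers: dict[str, str]) -> str:
        # Combine stable, quasi-unique browser attributes into one ID.
        signals = [
            headers.get('User-Agent', ''),
            headers.get('Accept-Language', ''),
            headers.get('Accept-Encoding', ''),
        ]
        return hashlib.sha256('|'.join(signals).encode()).hexdigest()[:16]

    example = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0',
        'Accept-Language': 'en-US,en;q=0.5',
        'Accept-Encoding': 'gzip, deflate, br',
        'DNT': '1',   # Do Not Track is advisory; nothing forces a site to honor it
    }
    print(fingerprint(example))   # same browser yields the same ID on every visit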

Facebook collects very personal information submitted to its website; Google collects very personal search histories. Should these especially personal sources be specifically protected?

Another way to look at all this is in terms of transparency. Google and Facebook are both quite transparent about how they sell ads. They are, however, much less transparent about how they monitor what users do at third-party sites, and about how they use users' email addresses and phone numbers as client identifiers. Should the government step in to regulate the level of transparency across the board, even if the actual advertising techniques don't change?
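As for email addresses and phone numbers: several ad platforms publicly document that advertiser-uploaded customer lists are matched against account records via normalized, hashed identifiers. The sketch below shows the idea; the names, addresses, and data layout are invented for illustration:

    # Hypothetical hashed-identifier matching between an advertiser's
    # customer list and a platform's user accounts.
    import hashlib

    def normalize_and_hash(email: str) -> str:
        # Both sides must normalize identically, or nothing matches.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    platform_users = {normalize_and_hash(e): uid for uid, e in
                      [(101, 'alice@example.com'), (102, 'bob@example.com')]}

    advertiser_list = ['Alice@Example.com', 'carol@example.com']
    matched = [platform_users[h] for e in advertiser_list
               if (h := normalize_and_hash(e)) in platform_users]
    print(matched)   # [101]: Alice's offline purchase now links to her ad history

Hashing is often described as privacy-protective here, but since both sides hash the same identifier, it functions as a shared key rather than as anonymization.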

As an example more directly tied to the use of facebook.com itself, right now if you keep checking a Friend's page, that Friend has no way of knowing: your privacy in "lurking" is safe. But Facebook could easily change that, even retroactively. Should they be able to do that without any regulation or restraint?

There are a few classic drawbacks to regulation. One is that regulation is inevitably politicized; regulation under an administration that disapproves of one of these companies is likely to be more onerous. Another is that innovation might be inhibited, potentially foreclosing future forms of advertising that are less intrusive or annoying. Despite its size, Facebook faces considerable competitive pressure, and some regulations might leave it unable to respond to new competitors (TikTok, for example). At the present time Google is a bit more secure, but ultimately the same applies to it as well. Finally, there is regulatory capture, in which regulators end up feeling more beholden to the industry than to consumers; the FAA and the FCC have both been accused of this.

If you wish, you can also focus on the collection and use of GPS location data. This data is collected mostly by phone apps and phone manufacturers. There are no restrictions on how often location data is collected (a map application must collect locations very frequently; a weather app needs far fewer updates). There are no restrictions on how the data can be sold and resold later. Even if data is not sold, its owners might still allow clients to run custom queries on it; there are no restrictions on this either. And there are certainly no restrictions on how purchased location data can be used. Do we need new legal restrictions? Would clearer policies on location data be useful? Should location data be collected only while apps are active? Note that location data can be very important to advertisers, both to advertise nearby things to do and to determine whether users who saw an ad for a store later went into that store.
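For instance, ad-to-store-visit attribution can be as simple as the following sketch, which checks whether any of a user's recorded location pings falls within some radius of the store. The coordinates, the 50-meter threshold, and the data layout are all invented for illustration:

    # Hypothetical ad-to-store-visit attribution from raw location pings.
    import math

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in meters.
        r = 6371000  # Earth radius, meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    STORE = (41.8781, -87.6298)                         # made-up store location
    pings = [(41.9000, -87.6500), (41.8783, -87.6295)]  # one user's location trail

    visited = any(distance_m(lat, lon, *STORE) < 50 for lat, lon in pings)
    print('saw the ad and later entered the store:', visited)

Whoever holds both the ad-impression log and the location trail can run this kind of query, and as noted above, nothing currently requires that users be told it happens.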


Topic option 2: Is it time to reform Section 230?

Is Section 230 still the right approach, or should it be modified?

You can focus on either 230(c)(1), which makes sites not liable for user-posted content, or 230(c)(2), which makes sites not liable for refusing to carry someone's content. Many courts have decided that 230(c)(1) implies 230(c)(2), but this isn't quite right: 230(c)(1) protects sites when they fail to take down some user-posted content, while 230(c)(2) protects them when they do take down some user-posted content. The overall goal was to protect sites that did family-friendly editing.

For 230(c)(1), there are several reform approaches. Perhaps if a site contributes to or encourages a certain kind of content, it should not entirely escape liability. Or perhaps there are certain kinds of content that sites should be required to monitor carefully, as is already the case for sex-trafficking content under the 2018 FOSTA-SESTA amendments. As things stand today, it is nearly impossible for outsiders and third parties to force the takedown of user-posted content.

230(c)(2) protects sites choosing to take down selected content, which, when the topic is political, is often viewed as censorship. Should this voluntary takedown power be limited? Here is the language of §230(c)(2):

No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This means that sites cannot be found liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased. But if Facebook and Twitter become "common carriers" to a degree, required to host offensive content, what might be the impact on the platforms themselves?

In the era of CompuServe and Prodigy, there is no record of anyone ever complaining that their posts were unfairly deleted. But times have changed. Should Twitter be able to flag political posts with "editorial" judgments? Should sites be able to suppress user posts about a politically charged story?

The US Department of Justice, under the Trump administration, proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" rule would be modified to cover only moderation decisions consistent with the site's terms of service. This might have required, for example, that sites agree to host "hate speech". You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.

An example of the 230(c)(2) issue is the story published by the New York Post on October 14, 2020, about Hunter Biden's emails allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter and downranked in newsfeeds on Facebook. The stated reasons were that the story was based on hacked personal information and that its sources were not well authenticated.

You can talk about either 230(c)(1) or 230(c)(2), or both. If you have strong feelings about one or the other, I recommend focusing on that.



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments. It is a good idea to address the potential consequences of your proposal.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)