Computer Ethics Paper 2              DRAFT

Preferred formats: .docx / .odt / .rtf

Topic option 1: Location Data

Location data collected by your phone has tremendous advertising value: you can be shown ads for things to do nearby. Some -- maybe most -- weather apps are just fronts for collecting location. Dating apps all collect location (so they can match you with people nearby), as do a great many others. Even some flashlight apps collect location data. This location data is then bought and sold on advertising markets, nominally stripped of your identity. And yet it is trivial to de-anonymize: the place where you spend most nights is almost certainly your home, and the place where you spend most weekdays is almost certainly your workplace.
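To see how little work this inference takes, here is a minimal Python sketch. Everything in it is hypothetical -- the trace variable, the coordinates, and the helper name snap are made up for illustration -- but real broker data is just a longer list of the same kind of timestamped latitude/longitude points:

    from collections import Counter
    from datetime import datetime

    # Hypothetical "anonymized" trace: (timestamp, latitude, longitude).
    # A real advertising-market trace looks much the same, minus any name.
    trace = [
        (datetime(2024, 3, 4, 2, 30), 40.7306, -73.9866),   # overnight
        (datetime(2024, 3, 4, 14, 0), 40.7484, -73.9857),   # weekday afternoon
        (datetime(2024, 3, 5, 3, 15), 40.7306, -73.9866),   # overnight
        (datetime(2024, 3, 5, 15, 30), 40.7484, -73.9857),  # weekday afternoon
    ]

    def snap(lat, lon, precision=3):
        """Round coordinates to roughly 100 m cells so repeat visits cluster."""
        return (round(lat, precision), round(lon, precision))

    # The most common location between midnight and 6 a.m. is almost
    # certainly home; the most common weekday 9-to-5 location is almost
    # certainly work.
    night = Counter(snap(lat, lon) for t, lat, lon in trace if t.hour < 6)
    day = Counter(snap(lat, lon) for t, lat, lon in trace
                  if t.weekday() < 5 and 9 <= t.hour < 17)

    print("likely home:", night.most_common(1)[0][0])
    print("likely work:", day.most_common(1)[0][0])

For most people, a home coordinate paired with a work coordinate narrows the "anonymous" record down to a single individual, no name required.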

The usual deal with location data is that you get the app for free, and in exchange the app collects your location and sells it.

Argue either that the collection of your location data is, on the whole, being handled acceptably, or that it is not. If you take the latter view, you may (but are not required to) advocate for restrictions on its use and collection. When analyzing the costs and benefits, try to be specific in identifying actual consequences of the use of location data, whether by the police or by others. You can propose policies that might be put into place by phone vendors, or simply standardized app policies.

If you feel that some rules should be in place on the collection and sale of location data, you can propose specific rules and argue for them.

For the collection and sale of location data for commercial use, here are some things to think about (you don't have to write about any of them; they are just suggestions):

In another direction, should law enforcement be allowed to buy location data? Collecting it directly, after all, might qualify as a Fourth Amendment search and thus require a warrant, as the Supreme Court held for cell-site location records in Carpenter v. United States (2018). Should apps be required to tell you whether they would consider selling your location data to law enforcement? How about to the military?

One extension of this idea is that apps that sell location data might be allowed (or encouraged) to adopt a classification system for location-data buyers. Categories might include "not law enforcement", "not bounty hunters", "location-based advertising only", etc. Of course, such categories would need some sort of enforcement mechanism, but you may assume that part has been worked out if you wish.



Topic option 2: Is it time to reform Section 230?

Is Section 230 still the right approach, or should it be modified? Should websites still have a free hand in content moderation?

You can focus on either 230(c)(1), which shields sites from liability for user-posted content, or 230(c)(2), which shields them from liability for refusing to carry someone's content. Many courts have held that 230(c)(1) implies 230(c)(2), but this isn't quite right: 230(c)(1) protects sites when they fail to take down some user-posted content, while 230(c)(2) protects them when they do take it down. The overall goal was to protect sites that engaged in family-friendly editing.

For 230(c)(1), there are several reform approaches. Perhaps if a site contributes to or encourages a certain kind of content, it should not entirely escape liability. Or perhaps there are certain kinds of content that sites should be required to monitor carefully, as is already the case for sex-trafficking content under FOSTA (2018). As things stand today, it is nearly impossible for third parties to force the takedown of user-posted content.

230(c)(2) protects sites that choose to take down selected content, a power that, when the content is political, is often viewed as censorship. Should this voluntary takedown power be limited? Here is the language of §230(c)(2):

No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This means that sites cannot be held liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased. For a good example of Google's capricious application of its own moderation policies, see this story on YouTube's age-restriction system: arstechnica.com/gaming/2022/09/youtube-age-restriction-quagmire-exposed-by-78-minute-mega-man-documentary.

But if Facebook and Twitter were reclassified, to some degree, as "common carriers", required to host spam and offensive content, what might be the impact on the platforms themselves?

In the era of CompuServe and Prodigy, there is no record of anyone ever complaining that their posts were unfairly deleted. But times have changed. Should Twitter be able to flag political posts with "editorial" judgments? Should sites be able to suppress user posts about a politically charged story?

The US Department of Justice, under the Trump administration, proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" rule would be narrowed to cover only moderation decisions consistent with the site's terms of service. This might have meant, for example, that sites had to agree to host "hate speech", which is constitutionally protected. You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.

An example of the 230(c)(2) issue is the story published by the New York Post on October 14, 2020 about Hunter Biden's emails, allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter and downranked in Facebook's newsfeeds. The stated reasons were that the story was based on hacked personal information and that its sources were not well authenticated.

One problem with mandatory "free speech" policies is that spam would be covered too. So would the kind of offensive content that chases away users and, perhaps more importantly, advertisers. And there were lots of Twitter users who never went near political speech, but who were nonetheless very dissatisfied with the continued presence of Donald Trump on Twitter, and who were eager proponents of having him kicked off. How should Twitter deal with two large groups of users, each threatening to leave, and each with incompatible demands? Social media sites have solid business reasons for their moderation policies; to what extent should the government be able to overrule them? See especially the article How A Social Network Fails, which begins with the premise that "a social platform needs to provide a positive user experience. People have to like it." The article attributes most of the problems of X/Twitter to a failure to understand this.

You can talk about either 230(c)(1) or 230(c)(2), or both. If you have strong feelings about one or the other, I recommend focusing on that.



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments. It is a good idea to address the potential consequences of your proposal.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)