Computer Ethics Paper 2              DRAFT

Preferred formats: .docx / .odt / .rtf

Topic option 1: The Mosaic Theory of Privacy

The mosaic theory, articulated by the DC Circuit in the Antoine Jones case (technically United States v. Maynard), holds that while a few observations of a person might be acceptable, a large number of observations can be qualitatively different. GPS tracking for a day might not be a search, or at least not a significant one, but GPS tracking over a month is far more invasive. The DC Circuit used this theory to find that the 28-day GPS monitoring of Jones was a warrantless search. They wrote:

Prolonged surveillance reveals types of information not revealed by short-term surveillance, such as what a person does repeatedly, what he does not do, and what he does ensemble. These types of information can each reveal more about a person than does any individual trip viewed in isolation. Repeated visits to a church, a gym, a bar, or a bookie tell a story not told by any single visit, as does one's not visiting any of these places over the course of a month. The sequence of a person's movements can reveal still more; a single trip to a gynecologist's office tells little about a woman, but that trip followed a few weeks later by a visit to a baby supply store tells a different story. A person who knows all of another's travels can deduce whether he is a weekly church goer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups — and not just one such fact about a person, but all such facts.

The Supreme Court ignored the mosaic theory, instead basing its decision on the fact that the police trespassed on Jones's property by attaching the GPS tracker to his vehicle. The Supreme Court has a long history of deciding cases on narrower grounds rather than broader ones.

But do we need the mosaic theory after all? Or are we at least likely to need it in the near future? That is, should extended collection of routine data require a warrant, even if brief collection of the same data does not? After all, there is no doubt, in both a technical and a practical sense, that extended data collection reveals far more about a person than brief collection. One location track shows someone went to the doctor; multiple tracks might explain why. One observation of drug possession doesn't rule out an occasional recreational user, but a sufficient series might indicate a dealer. One drone overflight showing a lot of vehicles might indicate a party; multiple flights might establish an illegal junkyard. Information from dating apps, similarly, might reveal a very different picture over time.

Arguments in favor usually focus on the idea that, yes, lots of cross-referenced surveillance yields qualitatively more information about a person than isolated snippets do. See www.lawfaremedia.org/article/defense-mosaic-theory for a more thorough defense. The Supreme Court decided Carpenter on a different standard, but the mosaic approach might be somewhat more general and less ad hoc; while the Carpenter case was before the Supreme Court, many people (including the author of the Lawfare article) argued in its favor on mosaic grounds. The Carpenter rule specifies only that the police cannot access a person's historical location data without a warrant, but a mosaic-based ruling would eliminate the distinction between historical and real-time data collection. The Supreme Court has struggled to find a consistent underlying theory in Fourth Amendment cases over the past decade; the mosaic theory could provide a unifying approach.

As for arguments against, one important one is that this approach is unlikely to lead to clear-cut guidelines. One judge might say thirty days of location-data collection is too long, while another might rule that even a week is. But do we really need clear guidelines? Judges are, after all, used to judging, and eventually judges do produce, through adherence to precedent, reasonably uniform standards.



Topic option 2: Is it time to reform Section 230?

Is Section 230 still the right approach, or should it be modified? Should websites still have a free hand in content moderation?

You can either focus on 230(c)(1), which makes sites not liable for user-posted content, or 230(c)(2), which makes sites not liable for refusing to carry someone's content. Many courts have decided that 230(c)(1) implies 230(c)(2), but this isn't quite right: 230(c)(1) protects sites when they fail to take down some user-posted content, while 230(c)(2) protects them when they do take down some user-posted content. The overall goal was to protect sites that did family-friendly editing.

For 230(c)(1), there are several reform approaches. Perhaps if a site contributes to or encourages a certain kind of content, it should not entirely escape liability. Or perhaps there are certain kinds of content that sites should be required to monitor carefully, as is already the case with sex trafficking. As things stand today, it is nearly impossible for third parties to force the takedown of user-posted content.

230(c)(2) protects sites choosing to take down selected content, which, when the topic is political, is often viewed as censorship. Should this voluntary takedown power be limited? Here is the language of §230(c)(2):

No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This means that sites cannot be found liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased. Here's a good example of Google's capricious enforcement of its moderation policies: arstechnica.com/gaming/2022/09/youtube-age-restriction-quagmire-exposed-by-78-minute-mega-man-documentary.

But if Facebook and Twitter are reclassified as "common carriers" to a degree, required to host spam and offensive content, what might be the impact on the platforms themselves?

In the era of CompuServe and Prodigy, there is no record of anyone ever complaining that their posts were unfairly deleted. But times have changed. Should Twitter be able to flag political posts with "editorial" judgments? Should sites be able to suppress user posts about a politically charged story?

The US Department of Justice, under the Trump administration, proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" rule would be narrowed to cover only moderation decisions consistent with the site's terms of service. (The DoJ proposals were published online.) This might have required, for example, that sites agree to host "hate speech". You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.

An example of the 230(c)(2) issue is the story published by the New York Post on October 14, 2020 about Hunter Biden's emails allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter, and downranked in newsfeeds on Facebook. Stated reasons were that the story was based on hacked personal information, and its sources were not well authenticated.

One problem with mandatory "free speech" policies is that spam would be covered too. So would the kind of offensive content that chases away users and, perhaps more importantly, advertisers. And there were lots of Twitter users who never went near political speech, but who were nonetheless very dissatisfied by the continued presence of Donald Trump on Twitter, and who were eager to see him kicked off. How should Twitter deal with two large groups of users, each threatening to leave, and each with incompatible demands? Social media sites have solid business reasons for their moderation policies; to what extent should the government be able to overrule these policies? See especially the article How A Social Network Fails, which begins with the premise "a social platform needs to provide a positive user experience. People have to like it." The article attributes most of the problems of X/Twitter to failing to understand this.

You can talk about either 230(c)(1) or 230(c)(2), or both. If you have strong feelings about one or the other, I recommend focusing on that.



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), how well and how consistently you support your position, and the nature and completeness of your arguments. It is a good idea to address the potential consequences of your proposal.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)