Computer Ethics Paper 2

Preferred formats: .docx / .odt / .rtf

Topic option 1: Facial Recognition

Discuss the issues relating to widespread government use of facial-recognition software in public places (streets, theaters, malls). Is facial recognition a useful tool for society, and for police work? Or does its use need to be severely curtailed through regulation?

At this point, facial-recognition software is very effective, with error rates typically around 0.1%. It is no longer experimental. It appears that issues of racial bias (e.g., poorer recognition for some groups) have been solved, at least in the software itself (there are still issues with collecting images and with the subject's overall skin lightness or darkness).

Not that long ago, you were effectively anonymous walking around city streets outside your immediate neighborhood. Now you are not.

Clearview is a facial-recognition product nominally sold only to police. Should there be legal restrictions on its use by law enforcement? Note that facial-recognition software can be used to identify most individuals in a crowd photograph, such as one of a BLM protest or the January 6 Capitol riot. What are the civil-liberties implications of the use of facial-recognition software by police? Are there privacy concerns aside from traditional civil liberties? There are concerns that people will be falsely implicated and arrested because of facial-recognition errors, but other identification methods used by the police (including eyewitnesses) seem in fact to be much less reliable.

There is also the question of availability of facial-recognition software to everyone else, outside the police. Who is affected here? If a stranger sees you on the street and takes a picture, they can presumably find your real identity. This can lead, for example, to stalking. At one point, Facebook seemed on the verge of introducing this, but then they backed off.

Store and business cameras may flag facial images of people present in the store at the time of an incident, such as shoplifting or some other disturbance. Will this lead to discrimination?

These are some special cases to think about.

Either argue that facial-recognition software is, on the whole, beneficial for society, or that it is not. If you take the latter view, you may (but are not required to) advocate for some restrictions on its use.



Topic option 2: Is it time to reform Section 230?

Is Section 230 still the right approach, or should it be modified?

You can either focus on 230(c)(1), which makes sites not liable for user-posted content, or 230(c)(2), which makes sites not liable for refusing to carry someone's content. Many courts have decided that 230(c)(1) implies 230(c)(2), but this isn't quite right: 230(c)(1) protects sites when they failed to take down some user-posted content, and 230(c)(2) protects them when they did take down some user-posted content. The overall goal was to protect sites that did family-friendly editing.

For 230(c)(1), there are several reform approaches. Perhaps if a site contributes to or encourages a certain kind of content, it should not entirely escape liability. Or perhaps there are certain kinds of content that sites should be required to monitor carefully, as is already the case for sex trafficking. As things stand today, it is nearly impossible for outside third parties to force the takedown of user-posted content.

230(c)(2) protects sites choosing to take down selected content, which, when the topic is political, is often viewed as censorship. Should this voluntary takedown power be limited? Here is the language of §230(c)(2):

No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

This means that sites cannot be found liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased. But if Facebook and Twitter become "common carriers" to a degree, required to host offensive content, what might be the impact on the platforms themselves?

In the era of CompuServe and Prodigy, there is no record of anyone ever complaining that their posts were unfairly deleted. But times have changed. Should Twitter be able to flag political posts with "editorial" judgments? Should sites be able to suppress user posts about a politically charged story?

The US Department of Justice, under the Trump administration, proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" rule would be modified to cover only moderation decisions consistent with the site's terms of service. (The DoJ proposals are here.) This might have required, for example, that sites agree to host "hate speech". You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.

An example of the 230(c)(2) issue is the story published by the New York Post on October 14, 2020 about Hunter Biden's emails allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter, and downranked in newsfeeds on Facebook. Stated reasons were that the story was based on hacked personal information, and its sources were not well authenticated.

You can talk about either 230(c)(1) or 230(c)(2), or both. If you have strong feelings about one or the other, I recommend focusing on that.



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments. It is a good idea to address the potential consequences of your proposal.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)