Preferred formats: .docx / .odt / .rtf
In recent years, facial-recognition software -- that is, software that can match an image of a person against a database of face images, and thus identify the person -- has become quite accurate. Discuss the issues relating to widespread government use of facial-recognition software in public places (streets, theaters, malls). Is facial recognition a useful tool for society, and for police work? Or does its use need to be severely curtailed, through regulation? Who, if anyone, is unfairly affected by it?
At this point, facial-recognition software is very effective, with error rates in the neighborhood of 0.1% under good conditions. It is no longer experimental. It appears that the issues of racial bias (e.g., poorer recognition rates for some groups) have largely been solved, at least in the software itself (there are still issues with collecting images and with the subject's overall skin lightness or darkness).
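To make the numbers concrete, here is a minimal sketch, in Python, of what matching a face against a database amounts to, together with the back-of-the-envelope arithmetic behind an error rate on the order of 0.1%. Everything in it (the embeddings, the threshold, the crowd size) is an illustrative assumption, not a description of any particular product.

    # Toy illustration only; all data and numbers below are made up.
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two face-embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(probe, gallery, threshold=0.8):
        """Return the index of the most similar gallery embedding,
        or None if nothing clears the decision threshold."""
        scores = [cosine_similarity(probe, g) for g in gallery]
        best = int(np.argmax(scores))
        return best if scores[best] >= threshold else None

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(5, 128))                    # 5 enrolled faces (fake embeddings)
    probe = gallery[2] + rng.normal(scale=0.05, size=128)  # a noisy new photo of person 2
    print(best_match(probe, gallery))                      # prints 2

    # Even a 0.1% error rate adds up at crowd scale:
    error_rate = 0.001       # assumed misidentifications per face searched
    crowd_size = 10_000      # e.g., faces extracted from photos of a large crowd
    print(error_rate * crowd_size)   # about 10 people wrongly flagged

The arithmetic, not the code, is the point: an error rate that sounds negligible per lookup still produces a steady trickle of false matches once thousands of faces are searched, which is the backdrop for the questions about false implication below.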
Not that long ago, you were effectively anonymous walking around the city streets, outside your immediate neighborhood. Now you are not.
Here are some special cases to think about:
Clearview is a facial-recognition product nominally sold only to police. Should there be legal restrictions on its use by law enforcement? Note that facial-recognition software can be used to identify most individuals in a crowd photograph, such as of a BLM protest or the January 6 Capitol riot. What are the civil-liberties implications of the use of facial-recognition software by police? Are there privacy concerns aside from traditional civil liberties? There are concerns that people will be falsely implicated and arrested by facial-recognition errors, but other identification methods used by the police (including eyewitnesses) seem in fact to be much less reliable.
There is also the question of availability of facial-recognition software to everyone else, outside the police. Who is affected here? If a stranger sees you on the street and takes a picture, they can presumably find your real identity. This can lead, for example, to stalking. At one point, Facebook seemed on the verge of introducing this, but then they backed off.
Store and business cameras may tag certain faces as having been present in the store at the time of an incident, such as shoplifting or some other disturbance. Will this lead to discrimination?
Argue either that facial-recognition software is, on the whole, beneficial for society, or that it is not. If you take the latter view, you may (but are not required to) advocate for some restrictions on its use. When analyzing the costs and benefits, try to be specific in identifying actual consequences of the use of facial-recognition software, either by the police or by others.
Is Section 230 still the right approach, or should it be modified? Should websites still have a free hand in content moderation?
You can focus on either 230(c)(1), which makes sites not liable for user-posted content, or 230(c)(2), which makes sites not liable for refusing to carry someone's content. Many courts have treated 230(c)(1) as subsuming 230(c)(2), but this isn't quite right: 230(c)(1) protects sites when they failed to take down some user-posted content, and 230(c)(2) protects them when they did take down some user-posted content. The overall goal was to protect sites that engaged in family-friendly editing.
For 230(c)(1), there are several reform approaches. Perhaps a site that contributes to or encourages a certain kind of content should not entirely escape liability. Or perhaps there are certain kinds of content that sites should be required to monitor carefully, as is already the case with sex trafficking. As things stand today, it is nearly impossible for third parties to force the takedown of user-posted content.
230(c)(2) protects sites choosing to take down selected content, which, when the topic is political, is often viewed as censorship. Should this voluntary takedown power be limited? Here is the language of §230(c)(2):
No provider ... shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
This means that sites cannot be found liable for taking down content that doesn't fit their moderation guidelines, even if those guidelines are capricious or biased. Here's a good example of Google's capricious application of its own moderation policies: arstechnica.com/gaming/2022/09/youtube-age-restriction-quagmire-exposed-by-78-minute-mega-man-documentary.
But if Facebook and Twitter are reclassified, to some degree, as "common carriers", required to host spam and offensive content, what might be the impact on the platforms themselves?
In the era of CompuServe and Prodigy, there is no record of anyone ever complaining that their posts were unfairly deleted. But times have changed. Should Twitter be able to flag political posts with "editorial" judgments? Should sites be able to suppress user posts about a politically charged story?

The US Department of Justice, under the Trump administration, proposed modifying §230(c)(2) to replace the "otherwise objectionable" language above with more specific language about promoting terrorism and other outright unlawful content. Similarly, the "good faith" rule would be modified to cover only moderation decisions consistent with the site's terms of service. (The DoJ proposals are here.) This might have required, for example, that sites agree to host "hate speech". You don't have to agree with the DoJ approach, but are these proposals even partly on the right track? Should the moderation decisions of Facebook and Twitter be something that users can contest? Once upon a time, Twitter and Facebook were niche, offbeat tools. Today, if you're not present there, your voice is lost.
An example of the 230(c)(2) issue is the story published by the New York Post on October 14, 2020 about Hunter Biden's emails allegedly recovered from an old laptop: nypost.com/2020/10/14/hunter-biden-emails-show-leveraging-connections-with-dad-to-boost-burisma-pay. The story was blocked on Twitter, and downranked in newsfeeds on Facebook. Stated reasons were that the story was based on hacked personal information, and its sources were not well authenticated.
One problem with mandatory "free speech" policies is that spam would be covered too. So would the kind of offensive content that chases away users and, perhaps more importantly, advertisers. And there were lots of Twitter users who never went near political speech, but who were nonetheless very dissatisfied with the continued presence of Donald Trump on Twitter, and who were eager proponents of having him kicked off. How should Twitter deal with two large groups of users, each threatening to leave, and each with incompatible demands? Social media sites have solid business reasons for their moderation policies; to what extent should the government be able to overrule these policies? See especially the article How A Social Network Fails, which begins with the premise that "a social platform needs to provide a positive user experience. People have to like it." The article attributes most of the problems of X/Twitter to failing to understand this.
You can talk about either 230(c)(1) or 230(c)(2), or both. If you have strong feelings about one or the other, I recommend focusing on that.