Computer Ethics Paper 2

Submission in a Word-type format (.docx, .doc, .odt, .rtf) is preferred.

Topic option 1: Advertisers and Browser Privacy

Advertisers collect a lot of information about us as we browse the Internet. We've all seen those GDPR cookie-policy notices on websites. Is it time for the US to create its own regulations on user privacy and advertisers? In other words, is it time for the US government to get involved in what Facebook, Google, and others can and cannot do?

Facebook clearly needs to run ads in order to pay the bills. Should users be allowed to opt out of personal data collection entirely, receiving only generic ads? That would significantly undermine Facebook's advertising revenue.

The same concerns apply to smaller websites, such as media sites. They too rely on advertising revenue, and widespread opting out of data collection would cut into their revenue as well. One big difference, however, is that sites like Facebook require users to log in, and thus know the user's real identity, while sites that just run advertising generally do not. The latter usually carry trackers -- based on third-party cookies -- that collect user data and send it to an advertising network; the largest such network by far is Google's. At the level of a single site, this tracker data is anonymous, though over a large number of sites the identity of the user can become clear; for advertising purposes, the user's real name usually isn't important anyway. There are also a few advertising networks besides Facebook's and Google's that harvest this kind of data.
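
To make the mechanism concrete, here is a minimal sketch (in TypeScript, for Node) of what an ad network's tracking endpoint might look like. The domain "example-ads.com", the cookie name, and the port are hypothetical, and real networks are considerably more elaborate:

    // Hypothetical tracking endpoint for an ad network -- a sketch, not a real API.
    import * as http from "http";
    import { randomUUID } from "crypto";

    http.createServer((req, res) => {
        // Reuse the visitor's existing tracking cookie, or assign a fresh ID.
        const cookies = req.headers.cookie ?? "";
        const match = /track_id=([^;]+)/.exec(cookies);
        const id = match ? match[1] : randomUUID();

        // The Referer header names the page that embedded this tracker, so the
        // network sees the same ID show up on every site carrying its code.
        console.log(`visitor ${id} seen on ${req.headers.referer ?? "(unknown page)"}`);

        res.writeHead(200, {
            // SameSite=None is what allows the cookie to accompany third-party
            // requests; browsers additionally require the Secure attribute.
            "Set-Cookie": `track_id=${id}; Domain=.example-ads.com; SameSite=None; Secure`,
            "Content-Type": "image/gif",
        });
        res.end(); // a 1x1 transparent "pixel" would normally be returned here
    }).listen(8080);

The key point is that a single cookie, scoped to the ad network's domain, accompanies requests from every site that embeds the network's pixel or script; that is what lets one network assemble a cross-site browsing history.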

Some GDPR notices are a flat "allow cookies or don't visit"; others (more compliant with the actual terms of the GDPR) give you some choices. But those choices are often confusing.

Should there be rules regarding this widespread tracking of user Internet visits?

Should there be rules on the use of "fingerprinting" technology to identify website visitors by their browser, rather than via cookies?
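For concreteness, here is a toy sketch of the fingerprinting idea, in TypeScript for the browser. Real fingerprinting scripts combine far more signals (canvas rendering, installed fonts, audio-stack quirks, WebGL) to tell visitors apart:

    // A toy browser fingerprint -- a sketch of the technique, not a real library.
    async function fingerprint(): Promise<string> {
        const signals = [
            navigator.userAgent,
            navigator.language,
            `${screen.width}x${screen.height}x${screen.colorDepth}`,
            Intl.DateTimeFormat().resolvedOptions().timeZone,
            String(navigator.hardwareConcurrency),
        ].join("|");

        // Hash the combined signals into a short, stable identifier.
        const bytes = new TextEncoder().encode(signals);
        const digest = await crypto.subtle.digest("SHA-256", bytes);
        return Array.from(new Uint8Array(digest))
            .map((b) => b.toString(16).padStart(2, "0"))
            .join("");
    }

    // No cookie is ever set; the same device tends to produce the same hash.
    fingerprint().then((id) => console.log("fingerprint:", id));

Because nothing is stored on the user's machine, clearing cookies does not help; the same device keeps producing the same hash.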

Google Search is another source of information about users. Should there be rules about the use of this data, which is often exceptionally sensitive?

Are there special rules that should apply to Facebook and other social-media sites, which collect data in ways ordinary advertisers can only dream about? For example, social-media sites collect data from user-posted content, from "likes" and "retweets", and from off-site tracking where your actual identity is known. Should Facebook, for example, have to make it clearer that it collects information about visits to websites unrelated to Facebook? Facebook can tell the user was there, thanks to its third-party cookies on that website, but many users are unaware of this, and there is no explicit opt-out. At another level, there is widespread sentiment in Europe that Facebook should be regulated to prevent a recurrence of something like the Cambridge Analytica scandal. In that episode, Facebook (a) allowed users to agree to the export of data about their Friends, and (b) failed to audit the data that was downloaded. There is also (c): users weren't fully informed of the potential consequences of participating in the original quiz, though to be fair it's likely that nobody knew those consequences at the time.

There are many possible forms of regulation. Sites can be required to make some forms of data gathering optional, or at least to provide an opt-out. Sites can be required to tell users with whom their data was shared. Sites can be required to let users view their data, or delete it.

Yet another approach would be to require that tracking code honor various opt-out mechanisms, such as the Do Not Track option.
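
A hypothetical sketch of what honoring such signals might look like, in TypeScript for the browser (note that Do Not Track has since been deprecated in favor of the Global Privacy Control signal, which is not yet part of the standard DOM typings):

    // A sketch of tracking code that checks opt-out signals before doing anything.
    function userHasOptedOut(): boolean {
        // navigator.doNotTrack is "1" when the user has enabled Do Not Track.
        const dnt = navigator.doNotTrack === "1";
        // Global Privacy Control is the newer signal; it is not yet in the
        // standard TypeScript DOM typings, hence the cast.
        const gpc = (navigator as any).globalPrivacyControl === true;
        return dnt || gpc;
    }

    if (!userHasOptedOut()) {
        // Only here would the tracker set its cookie and report the visit.
    }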

Note that some of these forms of regulation (such as letting users view or delete their data) require that the sites be able to verify your real identity, and, once they know that, your privacy might be worse off.

There are a few classic drawbacks to regulation. One is that regulation is inevitably politicized; regulation under an administration that disapproves of Facebook is likely to be more onerous. Another is that innovation might be inhibited, blocking future advances in advertising that might very well be less intrusive or annoying. Finally, there is regulatory capture, in which the regulators end up feeling more beholden to the industry than to consumers; the FAA and the FCC have both been accused of this.



Topic option 2: Defamation Policy

Section 230 of the Communications Decency Act has made it nearly impossible to force sites to take down user-posted content. Has §230 gone too far?

This law, along with the DMCA, has enabled the rise of third-party-content sites. Some of these are mainstream, such as YouTube and Wikipedia. Some, like Reddit, are known for the freedom provided for users to say what they want. And some, like The Dirty, are simply in the business of encouraging salacious gossip.

The original goal of §230, however, was to protect sites that did family-friendly editing. If you think that §230 still makes sense, try to identify the important principles behind §230 and defend them. If you think it has gone too far, outline some sort of alternative, and explain whether your alternative should be based on regulation -- and if so how this would be implemented -- or on voluntary website cooperation. As an example of the latter, note how Google has limited visibility of user posts on YouTube, and made it harder to post anonymously.

Here are a few things to keep in mind.

CompuServe escaped liability (in Cubby v. CompuServe) because it did no editing of user-posted content. Should this hands-off position continue to receive the highest protection, or should some limited editing of user-posted content be considered the norm?

Should new §230 rules require some element of editing to attain a socially beneficial goal, such as reducing hostility or making a site more family-friendly? If a site engages in some form of editing of user-posted content, are there circumstances in which it should not escape liability completely?

Should sites that "encourage" mean-spirited or socially unproductive posts, by explicit and intentional policy, be held responsible?

Should there be a take-down requirement for disparaging posts? If so, how would you protect sites from celebrities or politicians who wanted nothing negative about them to appear on the Internet?

Should there be special rules for larger companies? Recall that even monitoring user-contributed content with software may be prohibitively expensive for startups.

Here's an article that contains several quotes from former Representative Chris Cox, one of the co-authors of §230, on how he sees it: www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.


Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments. It is a good idea to address the potential consequences of your proposal.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)