Computer Ethics Paper 2

Due: Wednesday, June 20, 2018

Topic option 1: Facebook and Privacy

On March 16, 2018, Facebook suspended the account of Cambridge Analytica, announcing that the latter had broken some of FB's data-usage and data-retention rules.

Just what went wrong here? Was there one step in the chain that was particularly bad? Or was it the whole sequence? Did CA take unfair advantage of a "loophole", or were they just following FB's rules? Or are we to blame, for not reading the FB Terms of Service carefully?

Here is what happened:

  1. In 2013, Aleksandr Kogan created an app, in the form of an online "quiz" or "survey". Users typically find such apps in their FB feed and click on them, but technically the app is a separate website. Kogan's app probably allowed him to figure out each user's political affiliation.
  2. The app used the Facebook Login feature, which is way easier than having users create their own accounts, but which also gives the developer access to some of your FB data. And, of course, allows for later serving of ads to the relevant FB users.
  3. Back in 2013 (and apparently starting in 2007), FB also allowed app creators to access data of Friends, in the sense that if Alice is a Friend of Bob, and Alice "agrees" to share the FB data she can see, then Bob's visible data is included. Just how much Friend data was available is unclear (maybe you can find out!), but it may have included Like lists.
  4. Users running Kogan's app had to click "ok" on the Terms of Service, which spelled out the data sharing.
  5. Kogan got 270,000 people to run his app. Some of them might have been paid.
  6. If each of those people had 500 friends, that's around 500*270,000 = 135 million people total. The Friends lists overlapped, though, so the true number is between 30 and 50 million.
  7. Kogan shared his data with Cambridge Analytica. This may or may not have been legal. Cambridge Analytica is sort of an advertising firm, but it's a little different from the usual "ad network, data broker or other advertising or monetization-related service", which is FB's official Naughty List.
  8. Cambridge Analytica supposedly agreed to delete the data some years ago, but did not. Facebook never followed up to check.
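
The arithmetic in step 6 can be sketched with a small simulation. This is a back-of-the-envelope illustration only: apart from the 270,000 app users and the 500-friends-each figure quoted above, every number here (the population size, the scaled-down counts, the assumption that friend lists are uniform random samples) is a made-up modeling assumption, and real friend networks overlap far more unevenly than this.

```python
import random

def unique_reach(num_users, friends_each, population, seed=0):
    """Count distinct people reached when each app user's friend list
    is modeled as a random sample from a finite population."""
    rng = random.Random(seed)
    reached = set()
    for _ in range(num_users):
        # one app user's friend list: sampled without replacement
        reached.update(rng.sample(range(population), friends_each))
    return len(reached)

# The naive upper bound from step 6: no overlap at all.
naive = 270_000 * 500
print(f"naive upper bound: {naive:,}")  # 135,000,000

# Scaled-down simulation (1:100 in both users and population) just to
# show the effect of overlap; these parameters are hypothetical.
est = unique_reach(num_users=2_700, friends_each=500, population=1_350_000)
print(f"simulated unique reach (scaled): {est:,}")
```

Even this crude model shows why the true count falls well below 135 million: once friend lists are drawn from a shared pool, many of the same people are counted repeatedly, and clustering in real social networks shrinks the number further.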

So what should have been done differently? Does it come down to one glaring mistake at a single step, or was it the result of many incremental mistakes? Was it an honest belief that more shared data would lead to a better user experience? FB stopped apps from acquiring data on Friends in 2014, unless the Friend explicitly consented; after all, the Friends themselves had never agreed to the sharing. But by then the data was out. Note, though, that if the original user consented to share his or her FB view of the world, that view would include Friends. If you Friend someone, you know they will have access to your data.

Apps have always asked for permission to access your data. Many people don't read the ToS.

Facebook once had a vision that every app would be a form of social media, and that this would ultimately lead to benefits for users:

  We thought that every app could be social. Your calendar should have your events and your friends' birthdays, your maps should know where your friends live, your address book should show their pictures. It was a reasonable vision but it didn't materialize the way we had hoped.

This is from the Boz post below. Facebook's data sharing here was in a sense contrary to its own interests; it was arguably trying to address the interests and convenience of its users.

Mostly, FB does not sell data. It is not their business model. Their model is to gather the data, and let you buy ads according to criteria that the advertiser specifies. The advertiser normally never sees the data. See the "Boz" post below.

In your analysis, you can focus on the specific actions, or on general principles. The goal, however, should be to offer some practical advice for how companies can prevent awkward incidents like this. Was it a misunderstanding? A failure to realize how the data in question might actually end up being used? It's easy to give simplistic answers like "users should have their attorneys read the ToS before each click" or "Facebook should not share data with anyone", but those are impractical.

Note: Facebook has also been accused of giving phone manufacturers access to user data. But while there might have been inappropriate decisions here, the bottom line is that your device maker can always see whatever it is you are doing. If you don't trust them, you need to get a different phone.

Topic option 2: Defamation Policy

Section 230 of the Communications Decency Act has made it nearly impossible to compel sites to take down user-posted content. Has 230 gone too far?

This law, along with the DMCA, has enabled the rise of third-party-content sites. Some of these are mainstream, such as YouTube and Wikipedia. Some, like Reddit, are known for the freedom provided for users to say what they want. And some, like The Dirty, are simply in the business of encouraging salacious gossip.

The original goal of 230, however, was to protect sites that did family-friendly editing. If you think that 230 still makes sense, try to identify the important principles behind 230 and defend them. If you think it has gone too far, outline some sort of alternative, and explain whether your alternative should be based on regulation -- and if so how this would be implemented -- or on voluntary website cooperation. As an example of the latter, note how Google has limited visibility of user posts on YouTube, and made it harder to post anonymously.

Here are a few things to keep in mind.

CompuServe escaped liability because it did no editing of user-posted content. Should this position continue to receive the highest protection, or should some limited editing of user-posted content be considered the norm?

Should new 230 rules require some element of family-friendly editing, or editing to attain some other socially appropriate goal? If a site engages in some form of editing of user-posted content, under what circumstances should the site escape liability?

Should sites that "encourage" inappropriate posts, by explicit and intentional policy, be held responsible?

Should there be a take-down requirement for disparaging posts? If so, how would you protect sites from celebrities or politicians who wanted nothing negative about them to appear on the Internet?

Keep in mind that a site can easily have a million posts a month, per employee (a number taken from Craigslist). A requirement to monitor all posts would likely end many of today's sites. Is there some middle ground?

Here's an article containing several quotes from former Representative Chris Cox, one of the co-authors of 230, on how he sees it.

Note that the FOSTA/SESTA act has now created one limit on 230 protections. Should there be more? Or did FOSTA/SESTA go too far?

Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)