Computer Ethics Paper 2

Due: Friday, June 21, 2019

Topic option 1: Facebook and Privacy

On March 16, 2018, Facebook suspended the account of Cambridge Analytica, announcing that the firm had broken some of FB's data-usage and data-retention rules.

Just what went wrong here? Was there one step in the chain that was particularly bad? Or was it the whole sequence? Did CA take unfair advantage of a "loophole", or were they just following FB's rules? Was FB being reckless, or were they following a vision where "every app could be social"? Or are we to blame, for not reading the FB Terms of Service carefully?

Here are the basic facts (dates updated according to facebook.com/zuck/posts/10104712037900071):

  1. In 2013, Aleksandr Kogan created an app, in the form of an online "quiz" or "survey". Users typically find these apps in their FB feed and click on them, but technically each one is a separate website. The quiz answers probably allowed Kogan to figure out each user's political affiliation.
  2. The app used the Facebook Login feature, which is far easier than having users create their own accounts, but which also gives the developer access to some of your FB data. It also allows the developer to serve ads to the relevant FB users later. (A rough sketch of what such a login request looked like appears after this list.)
  3. Back in 2013 (and apparently starting in 2007), FB also allowed app creators to access the data of Friends, in the sense that if Alice is a Friend of Bob, and Alice "agrees" to share the FB data she can see, then Bob's visible data is included. Just how much Friend data was available is unclear (maybe you can find out!), but it probably included Like lists. That is usually enough to guess at political affiliation.
  4. Users running Kogan's app had to click "ok" on the Terms of Service, which spelled out the data sharing.
  5. Kogan got 270,000 people to run his app. Some of them might have been paid.
  6. If each of those people had 500 Friends, that's around 500 * 270,000 = 135 million people in all. The Friends lists overlapped, though, so the true number is probably between 30 and 80 million; see the back-of-the-envelope calculation after this list.
  7. Kogan shared his data with Cambridge Analytica. This may or may not have been legal. Cambridge Analytica is sort of an advertising firm, in that it runs political ads, but it is a little different from the usual "ad network, data broker or other advertising or monetization-related service", which is FB's official Naughty List. CA may simply have felt this rule did not apply to them.
  8. Cambridge Analytica reportedly agreed, some years earlier, to delete the data, but did not. Facebook never followed up to verify the deletion.
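
As an aside for those who have not seen an OAuth-style login, here is roughly what the permission request in items 2 and 3 amounted to. This is a hedged sketch, not a reproduction of Kogan's actual app: the endpoint is Facebook's standard OAuth login dialog, but the app credentials are placeholders, and the scope names follow the style of the old Graph API v1.0 (the friends_* permissions are the ones retired in 2014).

    from urllib.parse import urlencode

    # Placeholder app credentials -- illustrative values only.
    APP_ID = "1234567890"
    REDIRECT = "https://quiz-app.example.com/fb-callback"

    # Under Graph API v1.0 (pre-2014), an app could request not only the
    # logged-in user's own data but also "friends_*" permissions covering
    # that user's Friends. One click by the user thus "consented" on
    # behalf of everyone on their Friends list.
    params = {
        "client_id": APP_ID,
        "redirect_uri": REDIRECT,
        "scope": "public_profile,user_likes,friends_likes",
    }
    login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
    print(login_url)

The point of the sketch is that the entire consent step was a single login dialog, and the friends_* scopes are what made item 3 possible.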
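The arithmetic in item 6 can also be checked. The short Python calculation below uses assumed numbers (a pool of 200 million reachable accounts and 500 Friends per app user, chosen purely for illustration) to show why overlap shrinks the naive total:

    # Expected number of distinct Friends reached, if each app user's 500
    # Friends were drawn uniformly at random from a pool of `pool` accounts.
    # All numbers here are assumptions for illustration.
    app_users = 270_000
    friends_each = 500
    pool = 200_000_000

    draws = app_users * friends_each           # 135 million, counting repeats
    p_missed = (1 - 1 / pool) ** draws         # chance a given account is never drawn
    expected_unique = pool * (1 - p_missed)

    print(f"naive total:     {draws:,}")
    print(f"expected unique: {expected_unique:,.0f}")   # roughly 98 million

Even purely random mixing knocks 135 million down to roughly 98 million, and real Friend lists are far more clustered than random (app users tend to share Friends with one another), which pushes the count down further, toward the 30-80 million range cited above.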

So what should have been done differently? Does it come down to one step being a glaring mistake, or was it the result of many incremental mistakes? Was it an honest belief that more shared data would lead to a better user experience? FB has now stopped apps from acquiring data on Friends, who, after all, did not consent. (FB actually made this change in 2014, except where the Friend explicitly consented, but by then the data was already out.) Note, though, that if the original user consented to share his or her FB view of the world, that view would include Friends. If you Friend someone, you know they will have access to all the data you share with Friends.

Apps have always asked for permission to access your data, but most people don't read the ToS.

Facebook once had a vision that every app would be a form of social media, and that this would ultimately lead to benefits for users:

    We thought that every app could be social. Your calendar should have your events and your friends birthdays, your maps should know where your friends live, your address book should show their pictures. It was a reasonable vision but it didn't materialize the way we had hoped.

This is from the Boz post below. Facebook's data sharing here was in a sense contrary to its own interests; it was arguably trying to serve the interests and convenience of its users.

Mostly, FB does not sell data; that is not their business model. Their model is to gather the data themselves and sell ad placements according to criteria that the advertiser specifies. The advertiser normally never sees the underlying data. See the "Boz" post below.

In your analysis, you can focus on the specific actions, or on general principles. The goal, however, should be to offer some practical advice for how companies can prevent awkward incidents like this. Was it a misunderstanding? A failure to anticipate how the data in question might actually end up being used? It's easy to give simplistic answers like "users should have their attorneys read the ToS before each click" or "Facebook should not share data with anyone", but those are impractical.

Note: Facebook has also been accused of giving phone manufacturers access to user data; see nytimes.com/2018/06/04/technology/facebook-device-partnerships.html and nytimes.com/interactive/2018/06/03/technology/facebook-device-partners-users-friends-data.html. But while there might have been inappropriate decisions here, the bottom line is that your device maker can always see whatever you are doing. If you don't trust them, you need to get a different phone.

[Side note: vaguely along the lines of "every app could be social", Bill Gates once had a vision that we wouldn't share data; instead, we'd share small programs (maybe scripts?) that could gather and present data just as we'd like. This vision failed because sharing executables leads to rampant viruses.]



Topic option 2: Defamation Policy

Section 230 of the Communications Decency Act has made it nearly impossible to force websites to take down user-posted content. Has §230 gone too far?

This law, along with the DMCA, has enabled the rise of third-party-content sites. Some of these are mainstream, such as YouTube and Wikipedia. Some, like Reddit, are known for the freedom they give users to say what they want. And some, like The Dirty, are simply in the business of encouraging salacious gossip.

The original goal of §230, however, was to protect sites that did family-friendly editing. If you think that §230 still makes sense as a blanket safe harbor, try to identify the important principles behind it and defend them. If you think it has gone too far, outline some sort of alternative, and explain whether your alternative should be based on regulation -- and if so, how it would be implemented -- or on voluntary website cooperation. As an example of the latter, note how Google has limited the visibility of user posts on YouTube and made it harder to post anonymously.

Here are a few things to keep in mind.

Here's an article that contains several quotes from former Representative Chris Cox, one of the co-authors of §230, on how he sees it: www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.

FOSTA/SESTA (see the notes starting at pld.cs.luc.edu/courses/ethics/sum19/mnotes/speech.html#backpage) has now created one limit on §230 protections. Should there be more? Or did FOSTA/SESTA go too far?



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)