Computer Ethics Paper 2

Topic option 1: Facebook and Privacy

One recent proposal to address Facebook antitrust concerns is to require that Facebook create a mechanism for user-data portability. Users could download their Facebook data, upload it to a new social network, and pick up posting where they had left off. A related proposal is for Facebook to create a public API, so that, say, a new social network named Bookface could retrieve, on behalf of its users, the Facebook content of those users, and display it on Bookface. Such users would then gain the benefits of Facebook without ever actually being on it.
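To make the portability idea concrete, here is a minimal sketch of exporting and re-importing a user's data. The archive format, field names, and the "Bookface" importer are all invented for illustration; no real Facebook export format is assumed.

```python
import json

# Hypothetical export archive: format and field names are invented
# for illustration, not taken from any real Facebook data export.
export_archive = {
    "user": "alice",
    "posts": [
        {"date": "2019-01-01", "text": "Hello from my old network"},
    ],
    # Note: even "your own" data names your friends, which matters
    # for the privacy questions this paper asks about.
    "friends": ["bob", "carol"],
}

def import_archive(raw_json):
    """Parse an exported archive so a new network (say, Bookface)
    can re-create the user's posts."""
    data = json.loads(raw_json)
    return data["user"], data["posts"]

user, posts = import_archive(json.dumps(export_archive))
print(user, len(posts))  # alice 1
```

Even this toy example shows the privacy wrinkle: the exported archive contains a friends list, so "taking your data with you" inevitably moves some information about other people too.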

These two proposals, user-data portability and a public API, are most often touted as solutions to the antitrust issues surrounding Facebook. But the question for this paper is whether either of them would improve the privacy situation at Facebook.

One theory is that, by making it possible to change social networks, users could find the network that most closely met their privacy needs. The contrary theory is that we all pretty much have to pick the social network our friends are on. The contrary-contrary theory might be that an appropriate public API might make it possible to stay on a more private network, and still exchange updates with users of Facebook.

See reuters.com/article/us-tech-antitrust-congress/u-s-senators-want-social-media-users-to-be-able-to-take-their-data-with-them-idUSKBN1X112C. For privacy, would it really help?

That is, would this really create a competitive market for privacy? Or at least a reasonable fraction of a market?

There is also the idea, often touted by Facebook, that such a plan would make it harder for Facebook to continue to meet user needs.

And there is the question of just where the advertising goes, and whether moving your tracking data to a new network would simply make the tracking worse, since two networks would then be tracking you.

Facebook has lots of privacy issues. Perhaps the most famous, though not quite so closely tied to day-to-day tracking, is that of Cambridge Analytica, which is a case with some hard-to-categorize missteps. Here is what happened (dates updated according to facebook.com/zuck/posts/10104712037900071):

  1. In 2013, Aleksandr Kogan created an app, in the form of an online "quiz" or "survey". Users typically find these in their FB feed and click on them, but technically such an app is a separate website. Kogan's app probably allowed Kogan to figure out the user's political affiliation.
  2. The app used the Facebook Login feature, which is way easier than having users create their own accounts, but which also gives the developer access to some of your FB data. And, of course, allows for later serving of ads to the relevant FB users.
  3. Back in 2013 (and apparently starting in 2007), FB also allowed app creators to access data of Friends, in the sense that if Alice is a Friend of Bob, and Alice "agrees" to share the FB data she can see, then Bob's visible data is included. Just how much Friend data was available is unclear (maybe you can find out!), but it may have included Like lists.
  4. Users running Kogan's app had to click "ok" on the Terms of Service, which spelled out the data sharing.
  5. Kogan got 270,000 people to run his app. Some of them might have been paid.
  6. If each of those people had 500 friends, that's around 500*270,000 = 135 million people total. The Friends lists overlapped heavily, though; early reports put the true number between 30 and 50 million, and Facebook later estimated it at up to 87 million.
  7. Kogan shared his data with Cambridge Analytica. This may or may not have been legal. Cambridge Analytica is sort of an advertising firm, but it's a little different from the usual "ad network, data broker or other advertising or monetization-related service", which is FB's official Naughty List.
  8. Cambridge Analytica supposedly agreed to delete the data some years ago, but did not. Facebook never followed up to check.
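The back-of-the-envelope arithmetic in step 6 can be checked directly. The 500-friend figure is the assumption from the text, not a measured average:

```python
participants = 270_000   # users who ran Kogan's app (step 5)
avg_friends = 500        # assumed friends per participant (step 6)

# Naive upper bound, as if every participant's friends were distinct
naive_total = participants * avg_friends
print(naive_total)  # 135000000, i.e. 135 million

# Because friend lists overlap heavily, the number of distinct people
# actually affected was far below this naive upper bound.
```

The gap between the naive bound and the reported totals is entirely due to overlap among friend lists.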

It is not clear from this if there was one step which was a glaring mistake, or if there were many incremental mistakes. Facebook clearly had an honest belief that more shared data would lead to a better user experience. Would other social-media providers be any better? Was the real problem the sharing of your Friends' data? But if you share all of your data, doesn't that always include at least some information about your Friends?

(This post by Andrew "Boz" Bosworth, FB VP for being a grown-up, is illuminating: https://www.facebook.com/boz/posts/10104702799873151.)

The EU is strongly pushing data portability for banking. The idea is that, if you can take your banking data with you, banks will have trouble locking in customers and will have to provide competitive online features. Again, that's not exactly the same as privacy. See thenextweb.com/future-of-finance/2018/06/27/openbanking.



Topic option 2: Defamation Policy

Section 230 of the Communications Decency Act shields websites from liability for content posted by their users, which has made it nearly impossible to force the take-down of user-posted content. Has 230 gone too far?

This law, along with the DMCA, has enabled the rise of third-party-content sites. Some of these are mainstream, such as YouTube and Wikipedia. Some, like Reddit, are known for the freedom provided for users to say what they want. And some, like The Dirty, are simply in the business of encouraging salacious gossip.

The original goal of 230, however, was to protect sites that did family-friendly editing. If you think that 230 still makes sense, try to identify the important principles behind 230 and defend them. If you think it has gone too far, outline some sort of alternative, and explain whether your alternative should be based on regulation -- and if so how this would be implemented -- or on voluntary website cooperation. As an example of the latter, note how Google has limited visibility of user posts on YouTube, and made it harder to post anonymously.

Here are a few things to keep in mind.

CompuServe escaped liability because it did no editing of user-posted content. Should this position continue to receive the highest protection, or should some limited editing of user-posted content be considered the norm?

Should new 230 rules require some element of family-friendly editing, or editing to attain some other socially appropriate goal? If a site engages in some form of editing of user-posted content, under what circumstances should the site escape liability?

Should sites that "encourage" inappropriate posts, by explicit and intentional policy, be held responsible?

Should there be a take-down requirement for disparaging posts? If so, how would you protect sites from celebrities or politicians who wanted nothing negative about them to appear on the Internet?

Here's an article that contains several quotes from former Senator Chris Cox, one of the co-authors of 230, on how he sees it: www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.



Your paper (either topic) will be graded primarily on organization (that is, how you lay out your sequence of paragraphs), focus (that is, whether you stick to the topic), and the nature and completeness of your arguments.

It is, as usual, essential that all material from other sources be enclosed in quotation marks (or set off as a block quote), and preferably with a citation to the original source as well.

Expected length: 3-5 pages (1000+ words)