On March 16, 2018, Facebook suspended the account of Cambridge Analytica, announcing that the firm had violated some of FB's data-usage and data-retention rules.
Just what went wrong here? Was there one step in the chain that was particularly bad? Or was it the whole sequence? Did CA take unfair advantage of a "loophole", or were they just following FB's rules? Or are we to blame, for not reading the FB Terms of Service carefully?
Here is what happened (dates updated according to facebook.com/zuck/posts/10104712037900071):
So what should have been done differently? Does it come down to one glaring mistake, or was it the result of many incremental ones? Was it an honest belief that more shared data would lead to a better user experience? FB has since stopped apps from acquiring data on Friends; after all, the Friends did not consent. (FB stopped this in 2014, unless the Friend explicitly consented, but by then the data was out.) Note, though, that if the original user consented to share his or her FB view of the world, that view would include Friends. If you Friend someone, you know they will have access to your data.
Apps have always asked for permission to access your data. Many people don't read the ToS.
Facebook once had a vision that every app would be a form of social media, and that this would ultimately lead to benefits for users:
We thought that every app could be social. Your calendar should have your events and your friends' birthdays, your maps should know where your friends live, your address book should show their pictures. It was a reasonable vision but it didn't materialize the way we had hoped.
This is from the Boz post below. Facebook's data sharing here was in a sense contrary to its own interests; it was arguably trying to address the interests and convenience of its users.
For the most part, FB does not sell data; that is not its business model. Its model is to gather the data and then let advertisers buy ads targeted according to criteria the advertiser specifies. The advertiser normally never sees the data itself. See the "Boz" post below.
In your analysis, you can focus on the specific actions, or on general principles. The goal, however, should be to offer some practical advice for how companies can prevent awkward incidents like this. Was it a misunderstanding? A failure to realize how the data in question might actually end up being used? It's easy to give simplistic answers like "users should have their attorneys read the ToS before each click" or "Facebook should not share data with anyone", but those are impractical.
Note: Facebook has also been accused of giving phone manufacturers access to user data; see nytimes.com/2018/06/04/technology/facebook-device-partnerships.html and nytimes.com/interactive/2018/06/03/technology/facebook-device-partners-users-friends-data.html. But while there might have been inappropriate decisions here, the bottom line is that your device maker can always see whatever it is you are doing. If you don't trust them, you need to get a different phone.
Section 230 of the Communications Decency Act has made it nearly impossible to compel sites to take down user-posted content. Has §230 gone too far?
This law, along with the DMCA, has enabled the rise of third-party-content sites. Some of these are mainstream, such as YouTube and Wikipedia. Some, like Reddit, are known for the freedom they give users to say what they want. And some, like The Dirty, are simply in the business of encouraging salacious gossip. The original goal of §230, however, was to protect sites that did family-friendly editing.

If you think that §230 still makes sense, try to identify the important principles behind §230 and defend them. If you think it has gone too far, outline some sort of alternative, and explain whether your alternative should be based on regulation -- and if so how this would be implemented -- or on voluntary website cooperation. As an example of the latter, note how Google has limited the visibility of user posts on YouTube, and has made it harder to post anonymously.
Should there be a take-down requirement for disparaging posts? If so, how would you protect sites from celebrities or politicians who wanted nothing negative about them to appear on the Internet?
Keep in mind that a site can easily have a million posts a month per employee (a number taken from Craigslist). A requirement to monitor all posts would likely end many of today's sites. Is there some middle ground?
Here's an article that contains several quotes from former Representative Chris Cox, one of the co-authors of §230, on how he sees it: www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change.
Note that FOSTA/SESTA (see the notes starting at pld.cs.luc.edu/courses/ethics/sum18/mnotes/speech.html#backpage) has now created one limit on §230 protections. Should there be more? Or did FOSTA/SESTA go too far?