Computer Ethics, Summer 2010

Week 4, Tuesday (class 10), June 15
Corboy Law Room 323

Midterm was released Sunday June 13, and is due Tuesday June 15 (tonight) by ~midnight.

price discrimination

free speech



Old-fashioned examples of government privacy issues, now kind of quaint:

Matching: Should the government be able to do data mining on its databases? In particular, should it be able to compare databases for:

Should the following kinds of data be available to the government for large-scale matching?

Government data collection: what does this really have to do with computing? The government has resources to keep records on "suspects" even with pencil and paper.

Government and e-privacy:

What if FACIAL RECOGNITION were to really take off? What would be the consequences? There are all those cameras already.

Most arguments today against facial recognition are based on the idea that there are too many false positives. What if that stopped being the case?

What about camera evidence of running red lights or speeding?


Commercial privacy:

eBay privacy - eBay has (or used to have) a policy of automatically opening up its records on any buyer or seller to any police department, without subpoena or warrant.

This one is quite remarkable. What do you think? Is this ethical?


Medical Privacy- the elephant in the room?

HIPAA (Health Insurance Portability & Accountability Act) has had a decidedly privacy-positive effect here.




Odlyzko and price discrimination

Andrew Odlyzko's 2003 survey paper is at http://cs.luc.edu/pld/ethics/odlyzko.pdf.

What's the real goal behind all this commercial info? Especially grocery-store discount/club/surveillance cards. There are many possible goals, but here's one that you might not have thought about, in which your privacy can be "violated" even if you are anonymous!

basic supply/demand: one draws curves with price on the horizontal axis, and quantity on the vertical. The supply curve is increasing: the higher the price, the greater the quantity supplied. The demand curve, on the other hand, decreases with increasing price. However, these curves describe aggregates, not individual buyers.

Now suppose you set price P, and each user X has a threshold Px. The demand curve decreases as you raise P because fewer X's are willing to buy. Specifically, the quantity demanded at price P is the number of users X with Px ≥ P.

But what you really want is to charge each user X their individual price Px.

Example: Alice & Bob each want a report. Alice will pay €1100, bob will pay €600. You will only do it for €1500. If you charge Alice €1000 and Bob €500, both think they are getting a deal.

But is this FAIR to Alice?

In one sense, absolutely yes.

But what would Alice say when she finds out Bob paid half, for the same thing?

Possible ways to improve the perception of value:

What do computers have to do with this?

Airline pricing: horrendously complicated, to try to maximize revenue for each seat.

Online stores certainly could present different pricing models to different consumers. Does this happen? I have never seen any evidence of it, beyond recognizing different broad classes of consumers. Perhaps it takes the form of discounts for favorite customers, but that's a limited form of price discrimination.

Dell: different prices for business versus education customers. This is the same thing, though the education discount is not nearly as steep now.

Academic journal subscriptions and price discrimination: libraries pay as much as 10 times what individuals pay for some journals!

two round-trip tickets that include weekend stays can cost less than one that doesn't (this example is from ~2005; all flights are round trips):

    origin         destination    outbound    return       cost
    Minneapolis    Newark         Wed         Fri          $772.50
    Minneapolis    Newark         Wed         next week    $226.50
    Newark         Minneapolis    Fri         next week    $246.50

If you buy the second and third tickets and throw out the returns, you save almost $300! Airlines have actually claimed that if you don't fly your return leg, they can charge you extra.
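The arithmetic, as a quick sanity check (fares from the ~2005 table above):

    # Throwaway-ticket arithmetic from the fares above.
    single_roundtrip = 772.50           # MSP-EWR, out Wed, back Fri
    two_roundtrips = 226.50 + 246.50    # buy both, fly only the two "outbound" legs
    print(two_roundtrips)                        # 473.0
    print(single_roundtrip - two_roundtrips)     # 299.5 -- "almost $300"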

The issue is not at all specific to online shopping; it applies to normal stores as well. Sometimes it goes by the name "versioning": selling slightly different versions to different market segments, some at premium prices.


What about grocery stores?

CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbering): http://nocards.org

They're against grocery discount cards, also known as club cards or surveillance cards. A big part of CASPIAN's argument appears to be that the cards don't really save you money; that is, the stores immediately raise prices.

customer-specific pricing: http://nocards.org/overview

One recent customer-specific-pricing strategy: scan your card at a kiosk to get special discounts. nocards.org/news/index.shtml#seg3
Jewel's "avenu" program is exactly this: http://www.jewelosco.com/eCommerceWeb/AvenuAction.do?action=dispLoginPage

One clear goal within the industry is to offer the deepest discounts to those who would be unlikely to buy the product without them. In many cases, this means offering discounts to shoppers who are known to be PRICE SENSITIVE.

Clearly, the cards let stores know who is brand-sensitive and who is price-sensitive.

Loyal Skippy peanut butter customers would be unlikely to get Skippy discounts, except perhaps as part of a rewards strategy. They might qualify for Jif discounts instead.

Classic price discrimination means charging MORE to your regular customers, to whom your product is WORTH more, and giving the coupons to those who are more price-sensitive. Well, maybe the price-sensitive shoppers would get coupons for rice, beans, and peanut butter, while the price-insensitive shoppers would get coupons for imported chocolates, fine wines, and other high-margin items.
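Here is a hedged sketch, in Python, of how card data could support this kind of targeting. The purchase records, field names, and the 0.4 cutoff are all invented for illustration; real loyalty-card analytics are certainly more elaborate. The idea is simply: estimate each cardholder's price sensitivity from how often they buy items on promotion, then route coupons accordingly.

    # Illustrative only: classifying cardholders as price-sensitive vs. brand-loyal.
    from collections import defaultdict

    purchases = [
        # (card_id, item, bought_on_promotion)
        ("card1", "Skippy peanut butter", False),
        ("card1", "Skippy peanut butter", False),
        ("card1", "imported chocolate",   False),
        ("card2", "store-brand peanut butter", True),
        ("card2", "rice",  True),
        ("card2", "beans", False),
    ]

    promo_count = defaultdict(int)
    total_count = defaultdict(int)
    for card, item, on_promo in purchases:
        total_count[card] += 1
        promo_count[card] += on_promo

    for card in total_count:
        share = promo_count[card] / total_count[card]
        if share > 0.4:     # invented cutoff
            print(card, "price-sensitive: coupons for rice, beans, peanut butter")
        else:
            print(card, "price-insensitive: coupons for chocolates, wine, other high-margin items")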

"shopper surveillance cards": 1. Allow price discrimination: giving coupons etc to the price-sensitive only. There may be other ways to use this; cf Avenu at Jewel

The idea used to be that you, the consumer, could shop around, compare goods and prices, and make a smart choice. But now the reverse is also true: The vendor looks at its consumer base, gathers information, and decides whether you are worth pleasing, or whether it can profit from your loyalty and habits. -- Joseph Turow, Univ of Pennsylvania

2. segmentation (nocards.org/overview) What about arranging the store to cater to the products purchased by the top 30% of customers (in terms of profitability)? CASPIAN's example: the candy aisle was reduced, even though candy is a good seller, because the top 30% preferred baby products. Is this really enough to make the cards worth it to the stores, though?

Using a card anonymously doesn't help here, as long as you keep using the same card!

Using checkout data alone isn't enough, if "the groceries" are bought once a week but high-margin items are bought on smaller trips.
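A sketch of the segmentation idea under stated assumptions (the per-cardholder profits and category spending below are made up, and real category-management analysis is far more involved): rank cardholders by profitability, take the top 30%, and look at which categories they actually buy.

    # Illustrative sketch of "cater to the top 30%"; all numbers are invented.
    from collections import Counter

    customers = {
        # card_id: (annual_profit, {category: spend})
        "c1": (900, {"baby products": 400, "candy": 20}),
        "c2": (850, {"baby products": 300, "candy": 50}),
        "c3": (200, {"candy": 150}),
        "c4": (150, {"candy": 120}),
        "c5": (100, {"candy": 90, "baby products": 10}),
    }

    ranked = sorted(customers, key=lambda c: customers[c][0], reverse=True)
    top = ranked[: max(1, round(0.3 * len(ranked)))]    # top 30% by profit

    category_spend = Counter()
    for card in top:
        category_spend.update(customers[card][1])

    print("top 30%:", top)
    print("their categories:", category_spend.most_common())
    # Shelf space then follows the top group's preferences (baby products here),
    # even though candy sells well across the whole customer base.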

One of the most significant examples of price discrimination is college tuition: real tuition equals the list price minus your scholarship from the school. While many scholarships are outside the school's control, the reality is that schools charge wealthier families more for the same education.



Privacy wrap-up

Maybe the main point is simply that no one really cares much about privacy, at least in the sense of all that data out there about us. One can argue that at least we're consistent: collectively we tend to ignore "rights" issues with software both when it works in our favor (file sharing) and against us (privacy).

One secondary issue with privacy is the difference between "experts" and ordinary people: experts know a lot more about how to find out information on the Internet than everyone else. We'll come back to this "digital divide" issue later, under the topic "hacking", but note that there may be lots of available information out there about you that you simply are not aware of.



Free Speech

The Founding Fathers probably had political speech in mind when drafting the First Amendment:

    Congress shall make no law ... abridging the freedom of speech, or of the press;

Right off the bat note the implicit distinction between "speech" and "the press": blogging wasn't foreseen by the Founding Fathers!

The courts have at times held that Congress can abridge "offensive" speech. For example:

Baase p 145: information about contraception used to be in the category of restricted speech.

Traditional categories for free speech categorization (Baase, p 145)
Where should commercial websites fit? Where should personal websites (including blogs) fit?

Traditionally (actually, even more so now) the government regulates broadcast TV and radio the most strongly. It is assumed that essentially all content must be appropriate for minors (the practical issue is sexual content; the other things are inappropriate for everybody and there's not as much debate). Cable TV has somewhat greater latitude, but is still subject to FCC regulation.

Note that the list above addresses governmental restrictions on free speech. There are also civil restrictions: if you say something defamatory, you may be sued for libel. Libel is perhaps the biggest issue for "ordinary" people, at least in terms of creating speech: blogs, websites, etc. Libel law creates:
Finally, note that while most laws tend toward a utilitarian justification, the right of free speech, while not absolute, is seen as pretty fundamental. Specifically, speech may be restricted only if doing so is the least restrictive means of accomplishing the desired end. In this sense, freedom of speech under the US Constitution can be seen as a fundamental duty of the government, more akin to deontological reasoning.


sexual material, including pornography (though that is a pejorative term)

Miller v California, Supreme Court 1973: 3-part guideline for determining when something was legally obscene (as opposed to merely "indecent"):

For the internet, COMMUNITY STANDARDS is the problem: what community? This is in fact a huge problem, though it was already a problem with mail-order. (Note that only sexual material has been saddled with community-standards restrictions.)

As the Internet became more popular with "ordinary" users, there was mounting concern that it was not "child-friendly". This led to the Communications Decency Act (CDA) (Baase p 151)


Communications Decency Act

In 1996 Congress passed the Communications Decency Act (CDA) (Baase p 151). It was extremely broad.

From the CDA:

[it is forbidden to be someone who] uses any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards...

WHICH COMMUNITY?

On the internet, you cannot tell how old someone is.

Butler v Michigan, 1957: SCOTUS struck down a law making it illegal to sell material (pornography) in Michigan solely because it might be harmful to minors.

The CDA was widely viewed as an attempt by Congress to curry favor with a "Concerned Public" while knowing full well it was unlikely to withstand court scrutiny.

It did not. The Supreme Court ruled unanimously in 1997 that the censorship provisions were unconstitutional: they were too vague and did not use the "least-restrictive means" available to achieve the desired goal.

Child Online Protection Act (COPA), 1998: still stuck with the "community standards" rule. The law also authorized the creation of a commission; it was in defending COPA that the government later wanted some of Google's query data. The bulk of COPA was struck down.

CIPA: Children's Internet Protection Act, 2000 (Baase, p 158). Schools and libraries that want certain federal funding have to install filters.

Filters are sort of a joke, though they've gotten better. However, they CANNOT do what they claim. They pretty much have to block translation sites and all "personal" sites, as those can be used for redirection. See peacefire.org and stupidcensorship.com.
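To make the redirection point concrete, here is a toy sketch (the hostnames are invented, and real filters inspect much more than the hostname, but the structural problem is the same): a filter that checks the requested site against a blocklist passes a blocked page when it is fetched through a translation or proxy site, which is why such sites end up blocked wholesale.

    # Toy hostname-blocklist filter; all hostnames are invented for illustration.
    from urllib.parse import urlparse, parse_qs

    BLOCKLIST = {"blocked.example.com"}

    def filter_allows(url):
        """Naive check: look only at the host actually being contacted."""
        return urlparse(url).hostname not in BLOCKLIST

    direct  = "http://blocked.example.com/page"
    proxied = "http://translate.example.org/translate?u=http://blocked.example.com/page"

    print(filter_allows(direct))     # False: blocked, as intended
    print(filter_allows(proxied))    # True: same content, fetched via the translator
    print(parse_qs(urlparse(proxied).query)["u"])   # the real target, hiding in the query string

    # The blunt fix is to block translate.example.org itself -- i.e., block the whole
    # class of translation/proxy/"personal" sites, as described above.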

SCOTUS upheld CIPA in 2003.

The Chicago Public Library gave up on filters, but did install screen covers that make it very hard for someone to see what's on your screen. This both protects patron privacy AND protects library staff from what might otherwise be a "hostile work environment".

Baase has more on the library situation, p 157



Batzel v Cremers

One piece of the CDA survived: §230:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. [Wikipedia]

Why is this there?

Note that there is no limit of §230 to any particular area of law, e.g. libel. (Actually, there are limits if the issue is copyright law or criminal law.)

Note also that §230 addresses "publisher" liability and "author" liability. Another form, not exempted, is "distributor" liability.

The actual law is here: http://www.law.cornell.edu/uscode/47/usc_sec_47_00000230----000-.html. Note in particular the exemption sections (e)(1) and (e)(2). Note also that the law is titled "Protection for private blocking and screening of offensive material."


History of this as applies to protecting minors from offensive material

Cubby v CompuServe: 1991

District court only (federal, S.D.N.Y.). (Does anyone remember CompuServe?) CompuServe was a giant pre-Internet BBS available to paid subscribers. The "Rumorville" section, part of the Journalism Forum, was run by an independent company, Don Fitzpatrick Associates (DFA). Their contract guaranteed that DFA had "total responsibility for the contents". Rumorville was in essence an online newspaper: an expanded gossip column about the journalism industry. I have no idea who paid whom for the right to be present on CompuServe.

1990: Cubby Inc. and Robert Blanchard planned to start a competing online product, Skuttlebut, which was disparaged in Rumorville. Cubby et al. sued DFA and CompuServe for libel.

CompuServe argued it was only a distributor, and it escaped liability; in fact, it escaped on summary judgment! The court ruled that CompuServe had no control at all over content: it was like a bookstore, or a distributor.

While CompuServe may decline to carry a given publication altogether, in reality, once it does decide to carry a publication, it will have little or no editorial control over that publication's contents. This is especially so when CompuServe carries the publication as part of a forum that is managed by a company unrelated to CompuServe.

CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

It was and is generally accepted that distributors have no liability for content (unless it can be proven that they encouraged the content).

(we'll come back to "distributor liability" later.)


Stratton Oakmont v Prodigy: New York state court, 1995. On a financial-matters forum called "Money Talk," a Prodigy user (never identified) posted about Daniel Porush, the president of Stratton Oakmont, a financial services company. The remarks called Porush a "soon to be proven criminal" and said that Stratton Oakmont was a "cult of brokers who either lie for a living or get fired."

Prodigy claimed the CompuServe defense in its motion for summary judgment.

Prodigy lost, because they promised to monitor for bad behavior on the board. At the very least, they CLAIMED to the public that they reserved the right to edit or remove messages. This was in fact part of Prodigy's family-oriented marketing. Prodigy was trying to do "family values" editing (including the deletion of profanity), and it cost them.

In legal terms, Prodigy was held to "publisher liability" rather than the weaker "distributor liability" because it CLAIMED to exercise editorial judgment.

Prodigy did have some internal confusion about whether they were for the "free expression of ideas" or were "family safe"

Prodigy's policy was to ban individual attacks, but not group attacks; anti-semitic rants did appear and were not taken down.

After Prodigy lost its motion for summary judgment, the case was settled; Prodigy issued a public apology. In Wall Street versus America by Gary Weiss, the claim is made that the settlement did not involve the exchange of money. See http://books.google.com/books?id=iOhGkYqaEdwC&pg=PA213&lpg=PA213&dq=wall+street+versus+america+porush&source=b...t=result, page 215: "No money changed hands. No money had to change hands."

Weiss also points out that four years later

... Porush and his partners were all carted off to federal prison. In 1999, Porush and other Stratton execs pleaded guilty to securities fraud and money laundering for manipulating a bunch of Stratton IPOs... Stratton really was a den of thieves. Porush really was a criminal. [italics in original - pld]


Enter the CDA. §230 was intended to encourage family-values editing, because after the Stratton Oakmont case most providers were afraid to step in.

Whether this was specifically to encourage providers to remove profanity and obscenity, the nominal targets of the CDA, or whether it was just a compensatory free-speech-positive clause in an overall free-speech-very-negative law, is not clear.

Most of Congress apparently did not expect the CDA to withstand judicial scrutiny.

Congressional documents suggest that fixing the Stratton Oakmont precedent was the primary purpose of §230. However, arguably the reason for fixing Stratton Oakmont was to protect ISPs and websites that did try to provide a "family-friendly" environment.



Batzel v Cremers summary

Robert Smith was a handyman who worked for Ellen Batzel at her North Carolina home in 1999, doing repairs to her house and vehicles. Batzel's house was filled with large paintings in old frames that looked European.

Smith claims:

  1. Batzel told him that she was "the granddaughter of one of Hitler's right-hand men"
  2. He overheard Batzel tell someone that she was related to Heinrich Himmler (or else this was part of conversation #1)
  3. He was told by Batzel the paintings were "inherited"

Smith developed the theory that the paintings were artwork stolen by the Nazis and inherited by Batzel.

Smith had a dispute with Batzel [either about payments for work, or about Batzel's refusal to use her Hollywood contacts to help Smith sell his movie script]. It is not clear to what extent this dispute influenced Smith's artwork theory.

Smith sent his allegations about Batzel in an email to Ton Cremers, who ran a stolen-art mailing list; Smith had found Cremers through a search engine. This was still in 1999.

Smith claimed in his email that some of Batzel's paintings were likely stolen by the Nazis. (p 8432 of the decision, Absolute Page 5)

Smith sent the email to securma@museum-security.org

Cremers ran a moderated listserv specializing in stolen art. He included Smith's email in his next mailing. Cremers exercised editorial control both by deciding what to include and by editing the text as necessary.

He included a note that the FBI had been notified.

The normal address for Cremers's list was securma@xs4all.nl

Smith's emailed reply to someone when he found out he was on the list:

I [was] trying to figure out how in blazes I could have posted me [sic] email to [the Network] bulletin board. I came into MSN through the back door, directed by a search engine, and never got the big picture. I don't remember reading anything about a message board either so I am a bit confused over how it could happen. Every message board to which I have ever subscribed required application, a password, and/or registration, and the instructions explained this is necessary to keep out the advertisers, cranks, and bumbling idiots like me.

Some months later, Batzel found out and contacted Cremers, who contacted Smith, who continued to claim that what he said was true. However, he did say that he had not intended his message for posting.

On hearing that, Cremers did apologize to Smith.

Batzel disputed having any familial relationship to any Nazis, and stated the artwork was not inherited.

Batzel sued in California state court:

Cremers filed in Federal District Court on three grounds: lack of personal jurisdiction, California's anti-SLAPP law, and §230 immunity.

He lost on all three counts. (Should he have? We'll return to the jurisdiction one later. Jurisdiction is a huge issue in libel law!). The district court judge ruled that Cremers was not an ISP and so could not claim §230 immunity.

Cremers then appealed the federal issues (anti-SLAPP, jurisdiction, §230) to the Ninth Circuit, which simply ruled that §230 meant Batzel had no case. (Well, there was one factual determination left for the District Court, which then ruled on that point in Cremers' favor.)

 This was the §230 case that set the (famous) precedent. This is a major case in which both Congress and the courts purport to "get it" about the internet. But note that there was a steady evolution:

  1. the law Congress intended
  2. the law Congress actually wrote down
  3. how the Ninth Circuit interpreted the law


IS Cremers like an ISP here? The fact that he is editing the list he sends out sure gives him an active role, and yet it was Prodigy's active-editing role that the CDA §230 was arguably intended to protect.

Why does the Communications Decency Act have such a strong free-speech component? Generally, free speech is something the indecent are in favor of.

The appellate case was heard by the Ninth Circuit (the federal appellate court for California and other western states); a copy is at http://cs.luc.edu/pld/ethics/BatzelvCremers.pdf. (Page numbers in the sequel are given as printed/relative.)

Judge Berzon:

[Opening (8431/4)] There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world. Congress, however, has chosen for policy reasons to immunize from liability for defamatory or obscene speech "providers and users of interactive computer services" when the defamatory or obscene material is "provided" by someone else.

Note the up-front recognition that this is due to Congress.

Section 230 was first offered as an amendment by Representatives Christopher Cox (R-Cal.) and Ron Wyden (D-Ore.). (8442/15)

Congress made this legislative choice for two primary reasons. First, Congress wanted to encourage the unfettered and unregulated development of free speech on the Internet, and to promote the development of e-commerce. (8443/16) ...

(Top of 8445/18) The second reason for enacting § 230(c) was to encourage interactive computer services and users of such services to self-police the Internet for obscenity and other offensive material

[extensive references to congressional record]

(8447/20): In particular, Congress adopted § 230(c) to overrule the decision of a New York state court in Stratton Oakmont, 1995

Regarding the question of why a pro-free-speech clause was included in an anti-free-speech law (or, more precisely, addressing the suggestion that §230 shouldn't be interpreted as broadly pro-free-speech simply because the overall law was anti-free-speech):

(8445/18, end of 1st paragraph): Tension within statutes is often not a defect but an indication that the legislature was doing its job.

8448/21, start of section 2. To benefit from § 230(c) immunity, Cremers must first demonstrate that his Network website and listserv qualify as "provider[s] or user[s] of an interactive computer service."

The District court limited this to ISPs. The Circuit court argued that (a) Cremers was a provider of a computer service, and (b) that didn't matter because he was unquestionably a USER.

8450/23, at [12] Critically, however, § 230 limits immunity to information "provided by another information content provider."

Here's one question: was Smith "another content provider"? You can link and host all you want, provided others have created the material for online use. But if Smith wasn't a content provider, then Cremers becomes the originator.

The other question is whether Cremers was in fact partly the "provider", by virtue of his editing. Note, though, that the whole point of §230 is to allow (family-friendly) editing.

Answer to first question:

8450/23, 3rd paragraph: Obviously, Cremers did not create Smith's e-mail. Smith composed the e-mail entirely on his own. Nor do Cremers's minor alterations of Smith's e-mail prior to its posting or his choice to publish the e-mail (while rejecting other e-mails for inclusion in the listserv) rise to the level of "development."

More generally, the idea here is that there is simply no way to extend immunity to Stratton-Oakmont-type editing, or to removing profanity, while failing to extend immunity "all the way".

Is that actually true? [class discussion]

[end of class]

The Court considers some other partial interpretations of §230, but finds they are unworkable.

Second point:

8454/27, 3rd paragraph Smith's confusion, even if legitimate, does not matter, Cremers maintains, because the §230(c)(1) immunity should be available simply because Smith was the author of the e-mail, without more. We disagree. Under Cremers's broad interpretation of §230(c), users and providers of interactive computer services could with impunity intentionally post material they knew was never meant to be put on the Internet. At the same time, the creator or developer of the information presumably could not be held liable for unforeseeable publication of his material to huge numbers of people with whom he had no intention to communicate. The result would be nearly limitless immunity for speech never meant to be broadcast over the Internet. [emphasis added]

The case was sent back to the district court to determine this point (which it did, in Cremers's favor).

8457/30, at [19] We therefore ... remand to the district court for further proceedings to develop the facts under this newly announced standard and to evaluate what Cremers should have reasonably concluded at the time he received Smith's e-mail. If Cremers should have reasonably concluded,  for example, that because Smith's e-mail arrived via a different e-mail address it was not provided to him for possible posting on the listserv, then Cremers cannot take advantage of the §230(c) immunities.