Computer Ethics, Summer 2011

LT 412;  6:00-9:00 TTh, June 14, Class 7

Read Baase, chapter 3, on freedom of speech

gmail
RFID
SSN & Government
Price Discrimination
Free Speech
CDA §230, Batzel v Cremers
criminal libel
google convictions in Italy
regulated speech






A good TIME Magazine article about online tracking. This article has more examples of wrong or misleading information in advertiser/tracker databases. Note that some tracking is "soft" (tied only to our computer, and based on browsing history) while some is "hard" (specific business records involving our name/address or ssn or both).



New Tennessee Netflix-sharing law

Tennessee has banned the sharing of passwords to Netflix and related content-streaming sites.

http://blogs.findlaw.com/blotter/2011/06/tenn-law-netflix-password-sharing-is-a-crime.html
http://www.businessinsider.com/netflix-login-sharing-tennessee-2011-6

It seems that Tennessee is worried about the impact of Netflix password-sharing beyond Netflix itself. Do we need such a law? What kind of sharing does the law envision? What kind might it prevent? Netflix limits you to four simultaneous downloads.




gmail

All gmail is read at google. Just not necessarily by people. Does this matter?

Note that gmail has access to the full text of your email itself. This means google knows more about you than any regular web advertiser (except those doing full keystroke capture, which I'm still not sure actually occurs).
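
For concreteness, here is a toy sketch of the kind of automated keyword matching an ad-targeting pipeline might do. The categories, keywords, and function names are invented for illustration; this is not Google's actual system.

    import re

    # Invented ad categories and keywords, purely for illustration.
    AD_CATEGORIES = {
        "tennis": {"tennis", "racquet", "backhand", "wimbledon"},
        "travel": {"flight", "hotel", "itinerary", "passport"},
    }

    def ad_categories_for(message_text):
        """Return the ad categories whose keywords appear in the message."""
        words = set(re.findall(r"[a-z]+", message_text.lower()))
        return [cat for cat, kws in AD_CATEGORIES.items() if words & kws]

    print(ad_categories_for("Can you bring my tennis racquet to the hotel?"))
    # ['tennis', 'travel'] -- the message was "read", but not by a person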

What if Bradford Councilman, of the email-scanning scheme, had had automated software read the email, and this software then updated Councilman's book-pricing lists? Is this different from what gmail does, or the same?

What if google searched gmail for inside stock tips, and then invested?

What could google do with the information it learns about you? What could they do beyond learning of your areas of interest?

What could the government do, if they had access to any of it?

Once Upon A Time, some people laced their emails with words like "bomb" and "terrorist", intended as a troll for the NSA. If you're doing that today you're most likely trolling gmail instead of the NSA. Try lacing your google email with words related to a single hobby with substantial commercial presence (eg tennis), and see what ads you get. (Perhaps the most interesting test would be to choose a socially stigmatized hobby.)




What if your ISP examined your email? Would it make a difference if the reason was:



RFID

Original reading: Simson Garfinkel, Adopting Fair Information Practices to Low Cost RFID Systems.

Overall survey of active v passive rfid tags. Why they might remain attached to purchased items. RFID tags in identification cards

Differences between RFID and bar codes: in one sense, both work by being "illuminated" by a source of electromagnetic radiation. In practice, though, most ordinary materials are not opaque at RFID frequencies (so tags can be read without line of sight), and an RFID tag can store more information than a bar code.

creeping incursions: when do we take notice? Is there a feeling that this "only applies to stores"? Are there any immediate social consequences? Is there a technological solution?

How do we respond to real threats to our privacy? People care about SSNs now; why is that?

Are RFID tags a huge invasion of privacy, touching on our "real personal space", or are they the next PC/cellphone/voip/calculator that will revolutionize daily life for the better by allowing computers to interact with our physical world?

Imagine if all your clothing displays where you bought it: "Hello. My underwear comes from Wal*Mart"
(Well, actually, no; RFID tags don't take well to laundering.)

RFID tags on expensive goods, signaling that I have them: iPods, cameras, electronics

Loyola RFID cards

RFID v barcodes: unique id for each item, not just each type; readable remotely without your consent

"Kill" function

Active and passive tags

Are there ways to make us feel better about RFID??

Garfinkel's proposed RFID Bill of Rights:

Users of RFID systems and purchasers of products containing RFID tags have:

  1. The right to know if a product contains an RFID tag.
  2. The right to have embedded RFID tags removed, deactivated, or destroyed when a product is purchased.
  3. The right to first-class RFID alternatives: consumers should not lose other rights (e.g. the right to return a product or to travel on a particular road) if they decide to opt out of RFID or exercise an RFID tag’s “kill” feature.
  4. The right to know what information is stored inside their RFID tags. If this information is incorrect, there must be a means to correct or amend it.
  5. The right to know when, where and why an RFID tag is being read.

What about #3 and I-Pass? And cellphones?

Serious applications:

Technological elite: those with access to simple RFID readers? Sort of like those with technical understanding of how networks work?

2003 boycott against Benetton over RFID-tagged clothing: see boycottbenetton.com: "I'd rather go naked" (who, btw, do you think is maintaining their site? This page is getting old!)

Some specific reasons for Benetton's actions:

Is the real issue a perception of control? See Guenther & Spiekermann Sept 2005 CACM article, p 73 [not assigned as reading]. The authors developed two models for control of RFID information on tagged consumer goods:

Bottom line: Guenther & Spiekermann found that changing the privacy model for RFID did not really change user concerns.

Is there a "killer app" for RFID? Smart refrigerators don't seem to be it.

I-Pass is maybe a candidate, despite privacy issues (police-related). Speedpass (the wave-and-go credit card) is another example. And cell phones do allow us to be tracked and do function as RFID devices. But these are all "high-power" RFID, not passive tags.

What about existing anti-theft tags? They are subject to some of the same misuses.

Papers: Bruce Eckfeldt: focuses on benefits RFID can bring. Airplane luggage, security [?], casinos, museum visitors

Does RFID really matter? When would RFID matter?

RFID:

tracking people within a fixed zone, eg tracking within a store:

Entry/exit tracking

profiling people
cell-phone tracking: when can this be done?

Are there implicit inducements to waive privacy? If disabling the RFID tag means having to take products to the "kill" counter and wait in line, or losing warranty/return privileges, is that really a form of pressure to get us to leave the tag alone?

RFID shopping carts in stores: scan your card and you get targeted ads as you shop. From nocards.org:

"The other way it's useful is that if I have your shopping habits and I know in a category, for instance, that you're a loyal customer of Coca Cola, let's say, then basically, when I advertise Coca Cola to you the discount's going to be different than if I know that you're a ... somebody that's price sensitive." Fujitsu representative Vernon Slack explaining how his company's "smart cart" operates.

RFID MBTA (CharlieCard) hack? We'll come to this later, under "hacking". But see http://cs.luc.edu/pld/ethics/charlie_defcon.pdf (especially pages 41, 49, and 51) and (more mundane) http://cs.luc.edu/pld/ethics/mifare-classic.pdf.

RFID and card-skimming

Card-skimming is the practice of reading the information on magnetic-stripe cards (usually ATM cards) by attaching a secondary reader over the primary card slot. Readers can be purchased (illegally) that blend in with almost any model of ATM. Together with a hidden camera to capture your PIN, these systems can be used to max out the withdrawals of dozens or even hundreds of accounts each day.

At first sight, RFID seems like it would make this situation even worse: your card (but not PIN) can be skimmed while in your wallet. However, RFID can easily be coupled with "smart card" technology: having a chip on the card that can do public-key encryption and digital signing. (Interfacing such a chip with magnetic-stripe readers is tricky.) With such a smart card, and appropriate challenge-response infrastructure, skimming is useless.
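
Here is a minimal sketch of the challenge-response idea, using Python's third-party cryptography package. The key names and flow are illustrative only and do not follow any particular card standard (such as EMV); the point is that a skimmer which merely records traffic cannot answer a fresh challenge.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # Personalization: the private key never leaves the chip; the bank keeps the public key.
    card_private_key = ec.generate_private_key(ec.SECP256R1())
    bank_copy_of_public_key = card_private_key.public_key()

    # Each transaction: the terminal sends a fresh random challenge...
    challenge = os.urandom(32)
    # ...and the chip signs it. A skimmer that only recorded old responses
    # cannot produce a valid signature over a challenge it has never seen.
    response = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        bank_copy_of_public_key.verify(response, challenge, ec.ECDSA(hashes.SHA256()))
        print("transaction authorized")
    except InvalidSignature:
        print("rejected: replayed or forged response")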

Passports

See also http://getyouhome.gov

US passports have had RFID chips embedded for some years now. In the article at http://news.cnet.com/New-RFID-travel-cards-could-pose-privacy-threat/2100-1028_3-6062574.html, it is stated that

Homeland Security has said, in a government procurement notice posted in September [2005?], that "read ranges shall extend to a minimum of 25 feet" in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, "the solution must sense up to 55 tokens."

The notice, unearthed by an anti-RFID advocacy group, also specifies: "The government requires that IDs be read under circumstances that include the device being carried in a pocket, purse, wallet, in traveler's clothes or elsewhere on the person of the traveler....The traveler should not have to do anything to prepare the device to be read, or to present the device for reading--i.e., passive and automatic use."

The article also talks, though, about how passports (as opposed to the PASS cards usable for returning from Canada or Mexico) now have RFID-resistant "antiskimming material" in the front (and back?) cover, making the chip difficult to read when the passport is closed.

Currently, passport covers do provide moderately effective shielding. Furthermore, the data stream is encrypted, and cannot be read without the possession of appropriate keys (although it may still identify the passport bearer as a US citizen). An article in the December 2009 Communications of the ACM by Ramos et al suggested that the most effective attack would be to:

The actual information on the passport consists of your name, sex, date of birth, place of birth, and photograph. Note that to be in the vicinity of the customs counter, you generally have to have a paid international airplane ticket (though eavesdropping at highway crossings might also be possible), and forged blank passport books are also relatively expensive. In other words, this is not an easy scam to pull off. Risks to US citizens abroad seem pretty minimal.




Tracking: printer tracking dots; Word .doc format


SSN

see http://cpsr.org/issues/privacy/ssn-faq/

Privacy Act of 1974: govt entities can't require its use unless:

SSN and:

There had been a trend against using the SSN for student records; some students complained that no federal law authorized its collection for student records and therefore state schools could not require it. Alas, while this idea was gaining traction Congress introduced the Hope education tax credits, and now students are required to give their SSN to colleges even if they don't intend to claim the credit.

What exactly is identity theft?

National Identity Card: What are the real issues? tracking? matching between databases? Identity "theft"?

Starting on page 85, there's a good section in Baase on stolen data; see especially the table of incidents on page 87. What should be done about this? Should we focus on:

You have to give your SSN when applying for a marriage license, professional license, "recreational" license, and some others. Why should this be? For the answer, see http://www4.law.cornell.edu/uscode/42/usc_sec_42_00000666----000-.html. This is a pretty good example of a tradeoff between privacy and some other societal goal, with the latter winning out.


Old-fashioned examples of government privacy issues, now kind of quaint:

Matching: Should the government be able to do data mining on their databases? In particular, should they be able to compare DBs for:

Should the following kinds of data be available to the government for large-scale matching?

Government data collection: what does this really have to do with computing? The government has resources to keep records on "suspects" even with pencil and paper.

Government and e-privacy:

What if FACIAL RECOGNITION were to really take off? What would be the consequences? There are all those cameras already.

Most arguments today against facial recognition are based on the idea that there are too many false positives. What if that stopped being the case?
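
To see why false positives dominate, here is a quick base-rate calculation. The numbers are made up for illustration, not taken from any real deployment.

    # Illustrative numbers only -- not from any real system.
    scanned = 1000000        # faces scanned at, say, a stadium
    on_list = 10             # of those, actually on the watchlist
    tpr = 0.99               # chance a watchlisted face is flagged
    fpr = 0.001              # chance an innocent face is flagged anyway

    true_alerts = on_list * tpr                  # about 10
    false_alerts = (scanned - on_list) * fpr     # about 1000
    print(false_alerts / (true_alerts + false_alerts))   # ~0.99: most alerts are wrong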

What about camera evidence of running red lights or speeding?


Commercial privacy:

eBay privacy - eBay has (or used to have) a policy of automatically opening up its records on any buyer/seller to any police department, without subpoena or warrant.

This one is quite remarkable. What do you think? Is this ethical?


Medical privacy - the elephant in the room?

HIPAA (Health Insurance Portability & Accountability Act) has had a decidedly privacy-positive effect here.





Odlyzko and price discrimination

Andrew Odlyzko's 2003 survey paper is at http://cs.luc.edu/pld/ethics/odlyzko.pdf.

What's the real goal behind the collection of all this commercial information? Especially grocery-store discount/club/surveillance cards. There are many possible goals, but here's one that you might not have thought about, in which your privacy can be "violated" even if you are anonymous!

basic supply/demand: one draws curves with price on the horizontal axis and quantity on the vertical. The supply curve is increasing: the higher the price, the greater the supply. The demand curve, on the other hand, decreases with increasing price. However, these curves describe aggregates, not individual buyers.

Now suppose you set price P, and each user X has a threshold Px, the most X is willing to pay. The demand curve decreases as you raise P because fewer X's are willing to buy. Specifically, demand at price P is the number of users X with Px ≥ P.

But what you really want is to charge user X the price Px.

Example: Alice & Bob each want a report. Alice will pay €1100, bob will pay €600. You will only do it for €1500. If you charge Alice €1000 and Bob €500, both think they are getting a deal.

But is this FAIR to Alice?

In one sense, absolutely yes.

But what would Alice say when she finds out Bob paid half, for the same thing?

Possible ways to improve the perception of value:

What do computers have to do with this?

Airline pricing: horrendously complicated, to try to maximize revenue for each seat.

Online stores certainly could present different pricing models to different consumers. Does this happen? I have never seen any evidence of it, beyond recognizing different broad classes of consumers. Perhaps it takes the form of discounts for favorite customers, but that's a limited form of price discrimination.

Dell: different prices to business versus education. This is the same thing, though the education discount is not nearly as steep now.

Academic journal subscriptions and price discrimination: libraries pay as much as 10 times what individuals pay for some journals!

two roundtrip tickets including weekends can be less than one (this example is ~2005; all flights are round-trips):

    origin        destination    outbound    return       cost
    Minneapolis   Newark         Wed         Fri          $772.50
    Minneapolis   Newark         Wed         next week    $226.50
    Newark        Minneapolis    Fri         next week    $246.50

If you buy the second and third tickets and throw out the returns, you save almost $300! Airlines have actually claimed that if you don't fly your return leg, they can charge you extra.
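
The arithmetic behind "almost $300", using the fares in the table above:

    straight_roundtrip = 772.50
    throwaway_pair = 226.50 + 246.50             # 473.00: keep only the outbound legs
    print(straight_roundtrip - throwaway_pair)   # 299.50 saved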

The issue is not at all specific to online shopping; it applies to normal stores as well. Sometimes it goes by the name "versioning": selling slightly different versions to different market segments, some at premium prices.


What about grocery stores?

CASPIAN: http://nocards.org

They're against grocery discount cards, also known as club cards or surveillance cards. A big part of Caspian's argument appears to be that the cards don't really save you money; that is, the stores immediately raise prices.

customer-specific pricing: http://nocards.org/overview

One recent customer-specific-pricing strategy: scan your card at a kiosk to get special discounts. nocards.org/news/index.shtml#seg3
Jewel's "avenu" program is exactly this: http://www.jewelosco.com/savings/avenu.jsp.

One clear goal within the industry is to offer the deepest discounts to those who are less likely to try the product anyway. In many cases, this means offering discounts to shoppers who are known to be price-sensitive, and not to others.

Clearly, the cards let stores know who is brand-sensitive and who is price-sensitive.

Loyal Skippy peanut-butter customers would be unlikely to get Skippy discounts, except perhaps as part of a rewards strategy. They might qualify for Jif discounts.

Classic price discrimination means charging MORE to your regular customers, to whom your product is WORTH more, and giving the coupons to those who are more price-sensitive. Well, maybe the price-sensitive shoppers would get coupons for rice, beans, and peanut butter, while the price-insensitive shoppers would get coupons for imported chocolates, fine wines, and other high-margin items.

"shopper surveillance cards": 1. Allow price discrimination: giving coupons etc to the price-sensitive only. There may be other ways to use this; cf Avenu at Jewel

The idea used to be that you, the consumer, could shop around, compare goods and prices, and make a smart choice. But now the reverse is also true: The vendor looks at its consumer base, gathers information, and decides whether you are worth pleasing, or whether it can profit from your loyalty and habits. -- Joseph Turow, Univ of Pennsylvania

2. segmentation (nocards.org/overview) What about arranging the store to cater to the products purchased by the top 30% of customers (in terms of profitability)? Caspian case: the candy aisle was reduced, although candy is a good seller, because the top 30% preferred baby products. Is this really enough to make the cards worth it to the stores, though?

Using a card anonymously doesn't help here, as long as you keep using the same card!

Using checkout data alone isn't enough, if "the groceries" are bought once a week but high-margin items are bought on smaller trips.

One of the most significant examples of price discrimination is college tuition. The real tuition equals the list price minus your school scholarship. While many scholarships are outside of the control of the school, the reality is that schools charge wealthier families more for the same education.



Privacy wrap-up

Maybe the main point is simply that no one really does care about privacy, at least in the sense of all that data out there about us. One can argue that at least we're consistent: collectively we tend to ignore "rights" issues with software both when it works in our favor (file sharing) and against us (privacy).

One secondary issue with privacy is the difference between "experts" and ordinary people: experts know a lot more about how to find out information on the Internet than everyone else. We'll come back to this "digital divide" issue later, under the topic "hacking", but note that there may be lots of available information out there about you that you simply are not aware of.




Free Speech

The Founding Fathers probably had political speech in mind when drafting the First Amendment:

    Congress shall make no law ... abridging the freedom of speech, or of the press;

Right off the bat note the implicit distinction between "speech" and "the press": blogging wasn't foreseen by the Founding Fathers!

The courts have held that Congress can abridge "offensive" speech. For example:

Baase p 145: information about contraception used to be in the category of restricted speech.

Traditional categories of speech regulation (Baase, p 145)
Where should commercial websites fit? Where should personal websites (including blogs) fit?

Traditionally (actually, even more so now) the government regulates broadcast TV and radio the most strongly. It is assumed that essentially all content must be appropriate for minors (the practical issue is sexual content; the other categories are inappropriate for everybody, and there's not as much debate about them). Cable TV has somewhat greater latitude, but is still subject to FCC regulation.

(The government has few if any rules about violence on TV, though laws are occasionally introduced in Congress. The feds did bring the V-chip to every US television; it is almost universally unused by consumers. Broadcasters have their own rules about violence, however.)

Note that the list above addresses governmental restrictions on free speech. There are also civil restrictions: if you say something defamatory, you may be sued for libel. Libel is perhaps the biggest issue for "ordinary" people, at least in terms of creating speech: blogs, websites, etc. Libel law creates:
Finally, note that while most laws tend towards a utilitarian justification, the right of free speech, though not absolute, is seen as pretty fundamental. Specifically, speech may be restricted only if doing so is the least restrictive means of accomplishing the desired end. In this sense, freedom of speech under the US constitution can be seen as a fundamental duty of the government, more akin to deontological reasoning.


Sexual material, including pornography (though that is a pejorative term)

Miller v California, Supreme Court 1973: 3-part guideline for determining when something was legally obscene (as opposed to merely "indecent"):

For the internet, "community standards" is the problem: what community? This is in fact a huge problem, though it was already a problem with mail order.

As the Internet became more popular with "ordinary" users, there was mounting concern that it was not "child-friendly". This led to the Communications Decency Act (CDA) (Baase p 151)


Communications Decency Act

In 1996 Congress passed the Communications Decency Act (CDA) (Baase p 151). It was extremely broad.

From the CDA:

[it is forbidden to be someone who] uses any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards...

WHICH COMMUNITY?

On the internet, you cannot tell how old someone is.

Butler v Michigan, 1957: SCOTUS struck down a law making it illegal to sell material (pornography) in Michigan solely because it might be harmful to minors.

CDA was widely viewed as an attempt by Congress to curry favor with a "Concerned Public", while knowing full well it was unlikely to withstand court scrutiny.

It did not. The Supreme Court ruled unanimously in 1997 that the censorship provisions were not ok: they were too vague and did not use the "least-restrictive means" available to achieve the desired goal.

Child Online Protection Act (COPA), 1998: still stuck with "community standards" rule. The law also authorized the creation of a commission; this was the agency that later wanted some of google's query data. The bulk of COPA was struck down.

CIPA: Child Internet Protection Act, 2000 (Baase, p 158) Schools that want federal funding have to install filters.

Filters are sort of a joke, though they've gotten better. However, they CANNOT do what they claim. They pretty much have to block translation sites and all "personal" sites, as those can be used for redirection; note that many sites of prospective congressional candidates are of this type. See peacefire.org. And stupidcensorship.com.

SCOTUS upheld CIPA in 2003.

The Chicago Public Library gave up on filters, but did install screen covers that make it very hard for someone to see what's on your screen. This both protects patron privacy AND protects library staff from what might otherwise be a "hostile work environment".

Baase has more on the library situation, p 157



Batzel v Cremers

One piece of the CDA survived: §230:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. [Wikipedia]

Why is this there?

Note that §230 is not limited to any particular area of law, eg libel. (Actually, there are limits: it does not apply if the issue is copyright law or criminal law.)

Note also that §230 addresses "publisher" liability and "author" liability. Another form, not exempted, is "distributor" liability.

The actual law is here: http://www.law.cornell.edu/uscode/47/usc_sec_47_00000230----000-.html. Note in particular the exemption sections (e)(1) and (e)(2). Note also that section 230  is titled "Protection for private blocking and screening of offensive material".


History of this as it applies to protecting minors from offensive material

Cubby v CompuServe: 1991

District court only, New York State. (Does anyone remember CompuServe?) CompuServe was a giant pre-Internet BBS available to paid subscribers. The "rumorville" section, part of the Journalism Forum, was run by an independent company, Don Fitzpatrick Associates. Their contract guaranteed that DFA had "total responsibility for the contents". Rumorville was in essence an online newspaper, an expanded gossip column about the journalism industry. I have no idea who paid whom for the right to be present on CompuServe.

1990: Cubby Inc and Robert Blanchard plan to start a competing online product, Skuttlebut. This is disparaged in Rumorville. Cubby et al sue DFA & Compuserve for libel.

Compuserve argued they were only a distributor; they escaped liability. In fact, they escaped with Summary Judgement! The court ruled that they had no control at all over content. They are like a bookstore, or a distributor.

While CompuServe may decline to carry a given publication altogether, in reality, once it does decide to carry a publication, it will have little or no editorial control over that publication's contents. This is especially so when CompuServe carries the publication as part of a forum that is managed by a company unrelated to CompuServe.

CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

It was and is generally accepted that distributors have no liability for content (unless it can be proven that they encouraged the content).

(we'll come back to "distributor liability" later.)


Stratton Oakmont v Prodigy: New York state court, 1995. On a financial matters forum called "Money Talk," a Prodigy user (never identified) posted about Daniel Porush, the president of Stratton Oakmont, a financial services company. The remarks called Porush a "soon to be proven criminal" and said that Stratton Oakmont was a "cult of brokers who either lie for a living or get fired".

Prodigy claimed the Compuserve defense in their motion for summary judgement.

Prodigy lost, because they promised to monitor for bad behavior on the board. At the very least, they CLAIMED to the public that they reserved the right to edit or remove messages. This was in fact part of Prodigy's family-oriented marketing. Prodigy was trying to do "family values" editing (including the deletion of profanity), and it cost them.

In legal terms, Prodigy was held to "publisher liability" rather than the weaker "distributor liability" because they CLAIMED to exercise editorial judgement.

Prodigy did have some internal confusion about whether they were for the "free expression of ideas" or were "family safe"

Prodigy's policy was to ban individual attacks, but not group attacks; anti-semitic rants did appear and were not taken down.

After Prodigy lost their motion for summary judgement, the case was settled; Prodigy issued a public apology. In Wall Street versus America by Gary Weiss, the claim is made that the settlement did not involve the exchange of money. See http://books.google.com/books?id=iOhGkYqaEdwC&pg=PA213&lpg=PA213&dq=wall+street+versus+america+porush&source=b...t=result, page 215: "No money changed hands. No money had to change hands."

Weiss also points out that four years later

... Porush and his partners were all carted off to federal prison. In 1999, Porush and other Stratton execs pleaded guilty to securities fraud and money laundering for manipulating a bunch of Stratton IPOs... Stratton really was a den of thieves. Porush really was a criminal. [italics in original - pld]


Enter the CDA. §230 was intended to encourage family-values editing, because after the Stratton Oakmont case most providers were afraid to step in.

Whether this was specifically to encourage providers to remove profanity & obscenity, the nominal targets of the CDA, or whether it was just a compensatory free-speech-positive clause in an overall very free-speech-negative law, is not clear.

Most of Congress apparently did not expect the CDA to withstand judicial scrutiny.

Congressional documents suggest that fixing the Stratton Oakmont precedent was the primary purpose of §230. However, arguably the reason for fixing Stratton Oakmont was to protect ISPs and websites that did try to provide a "family-friendly" environment.



Batzel v Cremers summary

Robert Smith was a handyman who worked for Ellen Batzel at her North Carolina home in 1999, doing repairs to her house and vehicles. Batzel's house was filled with large paintings in old frames that looked European.

Smith claims:

  1. Batzel told him that she was "the granddaughter of one of Hitler's right-hand men"
  2. He overheard Batzel tell someone that she was related to Heinrich Himmler (or else this was part of conversation #1)
  3. He was told by Batzel the paintings were "inherited"

Smith developed the theory that the paintings were artwork stolen by the Nazis and inherited by Batzel.

Smith had a dispute with Batzel [either about payments for work, or about Batzel's refusal to use her Hollywood contacts to help Smith sell his movie script]. It is not clear to what extent this dispute influenced Smith's artwork theory.

Smith sent his allegations about Batzel in an email to Ton Cremers, who ran a stolen-art mailing list. Smith found Cremers  through a search engine. This is still 1999.

Smith claimed in his email that some of Batzel's paintings were likely stolen by the Nazis. (p 8432 of the decision, Absolute Page 5)

Smith sent the email to securma@museum-security.org

Cremers ran a moderated listserv specializing in this. He included Smith's email in his next release. Cremers exercised editorial control both by deciding inclusion and also by editing the text as necessary.

He included a note that the FBI had been notified.

The normal address for Cremers's list was securma@xs4all.nl

Smith's emailed reply to someone when he found out he was on the list:

I [was] trying to figure out how in blazes I could have posted me [sic] email to [the Network] bulletin board. I came into MSN through the back door, directed by a search engine, and never got the big picture. I don't remember reading anything about a message board either so I am a bit confused over how it could happen. Every message board to which I have ever subscribed required application, a password, and/or registration, and the instructions explained this is necessary to keep out the advertisers, cranks, and bumbling idiots like me.

Some months later, Batzel found out and contacted Cremers, who contacted Smith, who continued to claim that what he said was true. However, he did say that he had not intended his message for posting.

On hearing that, Cremers did apologize to Smith.

Batzel disputed having any familial relationship to any Nazis, and stated the artwork was not inherited.

Batzel sued in California state court:

Cremers filed in Federal District Court for:

He lost on all three counts. (Should he have? We'll return to the jurisdiction one later. Jurisdiction is a huge issue in libel law!). The district court judge ruled that Cremers was not an ISP and so could not claim §230 immunity.

Cremers then appealed the federal issues (anti-SLAPP, jurisdiction, §230) to the Ninth Circuit, which simply ruled that §230 meant Batzel had no case. (Well, there was one factual determination left for the District Court, which then ruled on that point in Cremers' favor.)

 This was the §230 case that set the (famous) precedent. This is a major case in which both Congress and the courts purport to "get it" about the internet. But note that there was a steady evolution:

  1. the law Congress intended
  2. the law Congress actually wrote down
  3. how the Ninth Circuit interpreted the law


IS Cremers like an ISP here? The fact that he is editing the list he sends out sure gives him an active role, and yet it was Prodigy's active-editing role that the CDA §230 was arguably intended to protect.

Cremers is an individual, of course, while Prodigy was a huge corporation. Did Congress mean to give special protections to corporations but not individuals?

Cremers was interested in the content on his list, but he did not create much if any of it.

Prodigy was interested in editing to create "family friendliness". Cremers edited basically to tighten up the reports that came in.

Why does the Communications Decency Act have such a strong free-speech component? Generally, free speech is something the indecent are in favor of.

The appellate case was heard by the Ninth Circuit (the Federal appellate court for California and other western states); a copy is at http://cs.luc.edu/pld/ethics/BatzelvCremers.pdf. (Page numbers in the sequel are given as as_printed/relative.)

Judge Berzon:

[Opening (8431/4)] There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world. Congress, however, has chosen for policy reasons to immunize from liability for defamatory or obscene speech "providers and users of interactive computer services" when the defamatory or obscene material is "provided" by someone else.

Note the up-front recognition that this is due to Congress.

Section 230 was first offered as an amendment by Representatives Christopher Cox (R-Cal.) and Ron Wyden (D-Ore.). (8442/15)

Congress made this legislative choice for two primary reasons. First, Congress wanted to encourage the unfettered and unregulated development of free speech on the Internet, and to promote the development of e-commerce. (8443/16) ...

(Top of 8445/18) The second reason for enacting § 230(c) was to encourage interactive computer services and users of such services to self-police the Internet for obscenity and other offensive material

[extensive references to congressional record]

(8447/20): In particular, Congress adopted § 230(c) to overrule the decision of a New York state court in Stratton Oakmont, 1995

Regarding question of why a pro-free-speech clause was included in an anti-free-speech law (or, more precisely, addressing the suggestion that §230 shouldn't be interpreted as broadly pro-free-speech simply because the overall law was anti-free-speech):

(8445/18, end of 1st paragraph): Tension within statutes is often not a defect but an indication that the legislature was doing its job.

8448/21, start of section 2. To benefit from § 230(c) immunity, Cremers must first demonstrate that his Network website and listserv qualify as "provider[s] or user[s] of an interactive computer service."

The District court limited this to ISPs [what are they?]. The Circuit court argued that (a) Cremers was a provider of a computer service, and (b) that didn't matter because he was unquestionably a USER.

[end of class 7]

But could USER have been intended to mean one of the army of Prodigy volunteers who kept lookout for inappropriate content? It would do no good to indemnify Prodigy the corporation if liability then simply fell on the volunteer administrators of Prodigy's editing system. Why would §230 simply say "or user" when what was meant was a specific user who was distributing content?

8450/23, at [12] Critically, however, § 230 limits immunity to information "provided by another information content provider."

Here's one question: was Smith "another content provider"? You can link and host all you want, provided others have created the material for online use. But if Smith wasn't a content provider, then Cremers becomes the originator.

The other question is whether Cremers was in fact partly the "provider", by virtue of his editing. Note, though, that the whole point of §230 is to allow (family-friendly) editing. So clearly a little editing cannot be enough to void the immunity.

Answer to first question:

8450/23, 3rd paragraph: Obviously, Cremers did not create Smith's e-mail. Smith composed the e-mail entirely on his own. Nor do Cremers's minor alterations of Smith's e-mail prior to its posting or his choice to publish the e-mail (while rejecting other e-mails for inclusion in the listserv) rise to the level of "development."

More generally, the idea here is that there is simply no way to extend immunity to Stratton-Oakmont-type editing, or to removing profanity, while failing to extend immunity "all the way".

Is that actually true? [class discussion]

The Court considers some other partial interpretations of §230, but finds they are unworkable.

Second point:

8454/27, 3rd paragraph Smith's confusion, even if legitimate, does not matter, Cremers maintains, because the §230(c)(1) immunity should be available simply because Smith was the author of the e-mail, without more. We disagree. Under Cremers's broad interpretation of §230(c), users and providers of interactive computer services could with impunity intentionally post material they knew was never meant to be put on the Internet. At the same time, the creator or developer of the information presumably could not be held liable for unforeseeable publication of his material to huge numbers of people with whom he had no intention to communicate. The result would be nearly limitless immunity for speech never meant to be broadcast over the Internet. [emphasis added]

The case was sent back to district court to determine this point (which it did, in Cremers's favor).

8457/30, at [19] We therefore ... remand to the district court for further proceedings to develop the facts under this newly announced standard and to evaluate what Cremers should have reasonably concluded at the time he received Smith's e-mail. If Cremers should have reasonably concluded,  for example, that because Smith's e-mail arrived via a different e-mail address it was not provided to him for possible posting on the listserv, then Cremers cannot take advantage of the §230(c) immunities.


Judge Gould partial dissent in Batzel v Cremers:

Quotes:

The majority gives the phrase "information provided by another" an incorrect and unworkable meaning that extends CDA immunity far beyond what Congress intended.

(1) the defendant must be a provider or user of an "interactive computer service"; (2) the asserted claims must treat the defendant as a publisher or speaker of information; and (3) the challenged communication must be "information provided by another information content provider." The majority and I agree on the importance of the CDA and on the proper interpretation of the first and second elements. We disagree only over the third element.

Majority: part (3) is met if the defendant believes this was the author's intention. Gould: This is convoluted! Why does the author's intention matter?

Below, when we get to threatening speech, we will see that the issue there is not the author's intention so much as a reasonable recipient's understanding.

The problems caused by the majority's rule would all vanish if we focused our inquiry not on the author's [Smith's] intent, but on the defendant's [Cremers'] acts [pld: emphasis added here and in sequel]

So far so good. But then Gould shifts direction radically:

We should hold that the CDA immunizes a defendant only when the defendant took no active role in selecting the questionable information for publication.

How does this help Prodigy with family-friendly editing or Stratton-Oakmont non-editing? Why not interpret (3) so the defendant is immunized if the author did intend publication on the internet?

Can you interpret §230 so as to (a) restrict protection to cases when there was no active role in selection, and (b) solve the Stratton Oakmont problem? Discuss.

Gould: A person's decision to select particular information for distribution on the Internet changes that information in a subtle but important way: it adds the person's imprimatur to it

No doubt about that part. But Congress said that chat rooms, discussion boards, and listservs do have special needs.

And why then add the "and users" language to the bill? These aren't users.

Gould: If Cremers made a mistake, we should not hold that he may escape all accountability just because he made that mistake on the Internet.

Did Congress decide to differ here?