Computer Ethics, Summer 2012

Corboy Law 602; Tuesdays & Thursdays, 6:00-9:00
Week 4, Class 7
      

Readings

  
Read Baase Chapter 3 on Speech
Read Baase Chapter 3 section 7 which includes patents

Some patent papers

These are also assigned reading.

1. Simson Garfinkel, Patently Absurd, 1993
Garfinkel's article is pretty easy reading, pointing out some problems with software patents specifically.

2. Richard Stallman on Patents, 2002





Stallman is against software patents, of course. However, his case here is better than many open-source-related arguments; in fact, it is squarely aligned with the interests of software-development businesses.

3. Paul Graham, a computer scientist and one of the partners of the venture-capital firm Y Combinator, wrote a 2006 essay, Are Software Patents Evil?

Graham makes the following claim early on:

One thing I do feel pretty certain of is that if you're against software patents, you're against patents in general. Gradually our machines consist more and more of software. Things that used to be done with levers and cams and gears are now done with loops and trees and closures. There's nothing special about physical embodiments of control systems that should make them patentable, and the software equivalent not.

Is this true?

Does it matter that Graham is also a radical proponent of using the Lisp programming language, which everybody else stopped using in the 1990s?

Graham also says,

Frankly, it surprises me how small a role patents play in the software business. It's kind of ironic, considering all the dire things experts say about software patents stifling innovation, but when one looks closely at the software business, the most striking thing is how little patents seem to matter.

But that paragraph is about software companies being sued by other software companies, not by "patent trolls".

Graham also makes some other claims, in particular some about the role of the patent system in business competition generally. Check out what he says about Reveal.




RFID MBTA fare-card hack? We'll come to this later, under "hacking". But see http://cs.luc.edu/pld/ethics/charlie_defcon.pdf (especially pages 41, 49, and 51) and (more mundane) http://cs.luc.edu/pld/ethics/mifare-classic.pdf.




Free Speech


Sexual material, including pornography (though that is a pejorative term), has been regulated for a long time.

Miller v California, Supreme Court 1973: this case established a three-part guideline for determining when something was legally obscene (as opposed to merely "indecent"):

  1. the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest;
  2. the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable state law;
  3. the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.

For the internet, community standards is the problem: what community? This is in fact a huge problem, though it was already a problem with mail-order.

As the Internet became more popular with "ordinary" users, there was mounting concern that it was not "child-friendly". This led to the Communications Decency Act (CDA) (Baase p 151)


Communications Decency Act

In 1996 Congress passed the Communications Decency Act (CDA) (Baase p 151). It was extremely broad.

From the CDA:

[it is forbidden to be someone who] uses any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards...

WHICH COMMUNITY?

On the internet, you cannot tell how old someone is.

Butler v Michigan, 1957: SCOTUS struck down a law making it illegal to sell material (pornography) in Michigan solely because it might be harmful to minors.

The CDA was widely viewed as an attempt by Congress to curry favor with a "Concerned Public", even though Congress knew full well the law was unlikely to withstand court scrutiny.

It did not. The Supreme Court ruled unanimously in 1997 that the censorship provisions were unconstitutional: they were too vague and did not use the "least restrictive means" available to achieve the desired goal.

Child Online Protection Act (COPA), 1998: still stuck with the "community standards" rule. The law also authorized the creation of a commission; this was the agency that later wanted some of Google's query data. The bulk of COPA was struck down.

CIPA: Children's Internet Protection Act, 2000 (Baase, p 158). Schools and libraries that want certain federal funding have to install filters.

Filters are sort of a joke, though they've gotten better. However, they CANNOT do what they claim. They pretty much have to block translation sites and all "personal" sites, as those can be used for redirection; note that many sites of prospective congressional candidates are of this type. See peacefire.org. And stupidcensorship.com.
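
Why do filters end up having to block translation and "personal" sites? A typical filter checks only the hostname of the requested URL against a blocklist; a blocked page fetched through a translation or proxy front-end appears to come from the front-end's (allowed) domain. Here is a minimal sketch in Python, assuming a hypothetical blocklist and a hypothetical translation service (the domain names are made up):

    from urllib.parse import urlparse, quote

    # Hypothetical blocklist such as a school filter might maintain.
    BLOCKED_HOSTS = {"example-blocked-site.com"}

    def filter_allows(url):
        """Naive filter: decide based only on the hostname of the requested URL."""
        return urlparse(url).hostname not in BLOCKED_HOSTS

    target = "http://example-blocked-site.com/page.html"

    # Direct request: the filter sees the blocked hostname and refuses it.
    print(filter_allows(target))       # False

    # The same page wrapped in a translation service's URL: the filter sees
    # only translate.example.org, which is not on the blocklist.
    wrapped = "https://translate.example.org/translate?u=" + quote(target, safe="")
    print(filter_allows(wrapped))      # True -- the blocked page slips through

Hence the dilemma: either the filter blocks the entire translation or proxy service, or the wrapped request gets through.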

SCOTUS upheld CIPA in 2003.

The Chicago Public Library gave up on filters, but did install screen covers that make it very hard for someone to see what's on your screen. This both protects patron privacy AND protects library staff from what might otherwise be a "hostile work environment".

Baase has more on the library situation, p 157



Batzel v Cremers

One piece of the CDA survived: §230:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. [Wikipedia]

Why is this there?

Note that §230 is not limited to any particular area of law, eg libel. (Actually, there are exceptions if the issue is copyright law or criminal law.)

Note also that §230 addresses "publisher" liability and "author" liability. Another form, not exempted, is "distributor" liability.

The actual law is here: http://www.law.cornell.edu/uscode/47/usc_sec_47_00000230----000-.html. Note in particular the exemption sections (e)(1) and (e)(2). Note also that section 230  is titled "Protection for private blocking and screening of offensive material".


History of this, as it applies to protecting minors from offensive material

Cubby v CompuServe: 1991

Federal district court only (S.D.N.Y.). (Does anyone remember CompuServe?) Giant pre-Internet BBS available to paid subscribers. The "Rumorville" section, part of the Journalism Forum, was run by an independent company, Don Fitzpatrick Associates. Their contract guaranteed DFA had "total responsibility for the contents". Rumorville was in essence an online newspaper: an expanded gossip column about the journalism industry. I have no idea who paid whom for the right to be present on CompuServe.

1990: Cubby Inc and Robert Blanchard plan to start a competing online product, Skuttlebut. This is disparaged in Rumorville. Cubby et al sue DFA & CompuServe for libel.

CompuServe argued they were only a distributor; they escaped liability. In fact, they escaped with summary judgement! The court ruled that they had no control at all over content. They are like a bookstore or other distributor.

While CompuServe may decline to carry a given publication altogether, in reality, once it does decide to carry a publication, it will have little or no editorial control over that publication's contents. This is especially so when CompuServe carries the publication as part of a forum that is managed by a company unrelated to CompuServe.

CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

It was and is generally accepted that distributors have no liability for content (unless it can be proven that they encouraged the content).

(we'll come back to "distributor liability" later.)


Stratton Oakmont v Prodigy: New York state court, 1995. On a financial matters forum called "Money Talk," a Prodigy user (never identified) posted about Daniel Porush, the president of Stratton Oakmont, a financial services company. The remarks called Porush a "soon to be proven criminal" and said that Stratton Oakmont was a "cult of brokers who either lie for a living or get fired".

Prodigy claimed the CompuServe defense in their motion for summary judgement.

Prodigy lost, because they promised to monitor for bad behavior on the board. At the very least, they CLAIMED to the public that they reserved the right to edit or remove messages. This was in fact part of Prodigy's family-oriented marketing. Prodigy was trying to do "family values" editing (including the deletion of profanity), and it cost them.

In legal terms, Prodigy was held to "publisher liability" rather than the weaker "distributor liability" because they CLAIMED to exercise editorial judgement.

Prodigy did have some internal confusion about whether they were for the "free expression of ideas" or were "family safe"

Prodigy's policy was to ban individual attacks, but not group attacks; anti-semitic rants did appear and were not taken down.

After Prodigy lost their motion for summary judgement, the case was settled; Prodigy issued a public apology. In Wall Street versus America by Gary Weiss, the claim is made that the settlement did not involve the exchange of money. See http://books.google.com/books?id=iOhGkYqaEdwC&pg=PA213&lpg=PA213&dq=wall+street+versus+america+porush&source=b...t=result, page 215: "No money changed hands. No money had to change hands."

Weiss also points out that four years later

... Porush and his partners were all carted off to federal prison. In 1999, Porush and other Stratton execs pleaded guilty to securities fraud and money laundering for manipulating a bunch of Stratton IPOs... Stratton really was a den of thieves. Porush really was a criminal. [italics in original - pld]


Enter the CDA. §230 was intended to encourage family-values editing, because after the Stratton Oakmont case most providers were afraid to step in.

Whether this was specifically to encourage providers to remove profanity & obscenity, the nominal targets of the CDA, or whether it was just a compensatory free-speech-positive clause in an overall very free-speech-negative law, is not clear.

Most of Congress apparently did not expect the CDA to withstand judicial scrutiny.

Congressional documents suggest fixing the Stratton Oakmont precedent was the primary purpose of §230. However, arguably the reason for fixing Stratton Oakmont was to protect ISPs and websites that did try to provide a "family-friendly" environment.



Batzel v Cremers summary

Robert Smith was a handyman who worked for Ellen Batzel at her North Carolina home in 1999, doing repairs to her house and vehicles. Batzel's house was filled with large paintings in old frames that looked European.

Smith claims:

  1. Batzel told him that she was "the granddaughter of one of Hitler's right-hand men"
  2. He overheard Batzel tell someone that she was related to Heinrich Himmler (or else this was part of conversation #1)
  3. He was told by Batzel the paintings were "inherited"

Smith developed the theory that the paintings were artwork stolen by the Nazis and inherited by Batzel.

Smith had a dispute with Batzel [either about payments for work, or about Batzel's refusal to use her Hollywood contacts to help Smith sell his movie script]. It is not clear to what extent this dispute influenced Smith's artwork theory.

Smith sent his allegations about Batzel in an email to Ton Cremers, who ran a stolen-art mailing list. Smith found Cremers  through a search engine. This is still 1999.

Smith claimed in his email that some of Batzel's paintings were likely stolen by the Nazis. (p 8432 of the decision, Absolute Page 5)

Smith sent the email to securma@museum-security.org

Cremers ran a moderated listserv specializing in this. He included Smith's email in his next release. Cremers exercised editorial control both by deciding inclusion and also by editing the text as necessary.

He included a note that the FBI had been notified.

The normal address for Cremers's list was: securma@xs4all.nl

Smith's emailed reply to someone when he found out he was on the list:

I [was] trying to figure out how in blazes I could have posted me [sic] email to [the Network] bulletin board. I came into MSN through the back door, directed by a search engine, and never got the big picture. I don't remember reading anything about a message board either so I am a bit confused over how it could happen. Every message board to which I have ever subscribed required application, a password, and/or registration, and the instructions explained this is necessary to keep out the advertisers, cranks, and bumbling idiots like me.

Some months later, Batzel found out and contacted Cremers, who contacted Smith, who continued to claim that what he said was true. However, he did say that he had not intended his message for posting.

On hearing that, Cremers did apologize to Smith.

Batzel disputed having any familial relationship to any Nazis, and stated the artwork was not inherited.

Batzel sued in California state court for defamation.

Cremers filed in Federal District Court for dismissal on three grounds: California's anti-SLAPP statute, lack of personal jurisdiction, and §230 immunity.

He lost on all three counts. (Should he have? We'll return to the jurisdiction one later. Jurisdiction is a huge issue in libel law!). The district court judge ruled that Cremers was not an ISP and so could not claim §230 immunity.

Cremers then appealed the federal issues (anti-SLAPP, jurisdiction, §230) to the Ninth Circuit, which simply ruled that §230 meant Batzel had no case. (Well, there was one factual determination left for the District Court, which then ruled on that point in Cremers' favor.)

 This was the §230 case that set the (famous) precedent. This is a major case in which both Congress and the courts purport to "get it" about the internet. But note that there was a steady evolution:

  1. the law Congress intended
  2. the law Congress actually wrote down
  3. how the Ninth Circuit interpreted the law


IS Cremers like an ISP here? The fact that he is editing the list he sends out sure gives him an active role, and yet it was Prodigy's active-editing role that the CDA §230 was arguably intended to protect.

Cremers is an individual, of course, while Prodigy was a huge corporation. Did Congress mean to give special protections to corporations but not individuals?

Cremers was interested in the content on his list, but he did not create much if any of it.

Prodigy was interested in editing to create "family friendliness". Cremers edited basically to tighten up the reports that came in.

Why does the Communications Decency Act have such a strong free-speech component? Generally free speech is something the indecent are in favor of.

The appellate case was heard by the Ninth Circuit (Federal Appellate court in CA, other western states); a copy is at http://cs.luc.edu/pld/ethics/BatzelvCremers.pdf. (Page numbers in the sequel are as_printed/relative).

Judge Berzon:

[Opening (8431/4)] There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world. Congress, however, has chosen for policy reasons to immunize from liability for defamatory or obscene speech "providers and users of interactive computer services" when the defamatory or obscene material is "provided" by someone else.

Note the up-front recognition that this is due to Congress.

Section 230 was first offered as an amendment by Representatives Christopher Cox (R-Cal.) and Ron Wyden (D-Ore.). (8442/15)

Congress made this legislative choice for two primary reasons. First, Congress wanted to encourage the unfettered and unregulated development of free speech on the Internet, and to promote the development of e-commerce. (8443/16) ...

(Top of 8445/18) The second reason for enacting § 230(c) was to encourage interactive computer services and users of such services to self-police the Internet for obscenity and other offensive material

[extensive references to congressional record]

(8447/20): In particular, Congress adopted § 230(c) to overrule the decision of a New York state court in Stratton Oakmont, 1995

Regarding the question of why a pro-free-speech clause was included in an anti-free-speech law (or, more precisely, addressing the suggestion that §230 shouldn't be interpreted as broadly pro-free-speech simply because the overall law was anti-free-speech):

(8445/18, end of 1st paragraph): Tension within statutes is often not a defect but an indication that the legislature was doing its job.

8448/21, start of section 2. To benefit from § 230(c) immunity, Cremers must first demonstrate that his Network website and listserv qualify as "provider[s] or user[s] of an interactive computer service."

The District court limited this to ISPs [what are they?]. The Circuit court argued that (a) Cremers was a provider of a computer service, and (b) that didn't matter because he was unquestionably a user.

But could user have been intended to mean one of the army of Prodigy volunteers who kept lookout for inappropriate content? It would do no good to indemnify Prodigy the corporation if liability then simply fell on the volunteer administrators of Prodigy's editing system. Why would §230 simply say "or user" when what was meant was a specific user who was distributing content?

8450/23, at [12] Critically, however, § 230 limits immunity to information "provided by another information content provider."

Here's one question: was Smith "another content provider"? You can link and host all you want, provided others have created the material for online use. But if Smith wasn't a content provider, then Cremers becomes the originator.

The other question is whether Cremers was in fact partly the "provider", by virtue of his editing. Note, though, that the whole point of §230 is to allow (family-friendly) editing. So clearly a little editing cannot be enough to void the immunity.

Here's the Ninth Circuit's answer to whether Cremers was the content provider [emphasis added]:

8450/23, 3rd paragraph: Obviously, Cremers did not create Smith's e-mail. Smith composed the e-mail entirely on his own. Nor do Cremers's minor alterations of Smith's e-mail prior to its posting or his choice to publish the e-mail (while rejecting other e-mails for inclusion in the listserv) rise to the level of "development."

More generally, the idea here is that there is simply no way to extend immunity to Stratton-Oakmont-type editing, or to removing profanity, while failing to extend immunity "all the way".

Is that actually true? [class discussion]

The Court considers some other partial interpretations of §230, but finds them unworkable.

Second point:

8454/27, 3rd paragraph Smith's confusion, even if legitimate, does not matter, Cremers maintains, because the §230(c)(1) immunity should be available simply because Smith was the author of the e-mail, without more. We disagree. Under Cremers's broad interpretation of §230(c), users and providers of interactive computer services could with impunity intentionally post material they knew was never meant to be put on the Internet. At the same time, the creator or developer of the information presumably could not be held liable for unforeseeable publication of his material to huge numbers of people with whom he had no intention to communicate. The result would be nearly limitless immunity for speech never meant to be broadcast over the Internet. [emphasis added]

The case was sent back to the district court to determine this point (which it did, in Cremers's favor).

8457/30, at [19] We therefore ... remand to the district court for further proceedings to develop the facts under this newly announced standard and to evaluate what Cremers should have reasonably concluded at the time he received Smith's e-mail. If Cremers should have reasonably concluded,  for example, that because Smith's e-mail arrived via a different e-mail address it was not provided to him for possible posting on the listserv, then Cremers cannot take advantage of the §230(c) immunities.


Judge Gould's partial dissent in Batzel v Cremers:

Quotes:

The majority gives the phrase "information provided by another" an incorrect and unworkable meaning that extends CDA immunity far beyond what Congress intended.

(1) the defendant must be a provider or user of an "interactive computer service"; (2) the asserted claims must treat the defendant as a publisher or speaker of information; and (3) the challenged communication must be "information provided by another information content provider." The majority and I agree on the importance of the CDA and on the proper interpretation of the first and second elements. We disagree only over the third element.

Majority: part (3) is met if the defendant believes this was the author's intention. Gould: This is convoluted! Why does the author's intention matter?

Below, when we get to threatening speech, we will see that the issue there is not the author's intention so much as a reasonable recipient's understanding.

The problems caused by the majority's rule would all vanish if we focused our inquiry not on the author's [Smith's] intent, but on the defendant's [Cremers'] acts [pld: emphasis added here and in sequel]

So far so good. But then Gould shifts direction radically:

We should hold that the CDA immunizes a defendant only when the defendant took no active role in selecting the questionable information for publication.

How does this help Prodigy with family-friendly editing or Stratton-Oakmont non-editing? Why not interpret (3) so the defendant is immunized if the author did intend publication on the internet?

Can you interpret §230 so as to (a) restrict protection to cases when there was no active role in selection, and (b) solve the Stratton Oakmont problem? Discuss.

Gould: A person's decision to select particular information for distribution on the Internet changes that information in a subtle but important way: it adds the person's imprimatur to it

No doubt about that part. But Congress said that chat rooms, discussion boards, and listservs do have special needs.

And why then add the "or user" language to the bill? These aren't users.

Gould: If Cremers made a mistake, we should not hold that he may escape all accountability just because he made that mistake on the Internet.

Did Congress decide to differ here?





The (potential) corporate liability for sexual harassment is perhaps the most frequently cited justification for lack of employee privacy regarding company email.

Should this liability be there, in light of §230? Does §230 mean that a company cannot be found liable as publisher or speaker for email created by employees?

Arguably, the main issue here is a "hostile work environment", which is a none-of-the-above in terms of publisher, author, or distributor liability. This is an important point regarding the extent of §230 immunity. Companies are not being found liable as publisher or author, but rather for "tolerating" the authorship.


Since this case, there have been MANY others decided by application of this decision. See eff.org's section on Free Speech, http://www.eff.org/issues/free-speech.

There have also been many attacks on §230 immunity. Some limitations may come, someday.

Publisher liability (except when eliminated by §230) exists even without knowledge of defamatory material's inclusion.

Distributor liability is not exempted by §230. It is liability for knowingly distributing defamatory material. However, in Zeran v AOL (below), the courts found that prior notice doesn't automatically make for distributor liability.

Most likely approach to attack §230 immunity (2010): distributor liability.



Is there another interpretation of §230 that is more conservative?

1. Limiting protection to genuine ISP-like services (perhaps run by individuals). But the law has the phrase "or user"; is that consistent?

2. Limiting protection where the provider does not actively select material, but only removes material posted by others. This might have been what some in Congress had in mind, but is it workable?


§230 odds and ends

There have been attacks on the §230 defense, but courts have been unwilling to date to allow exceptions, or to restrict coverage to "traditional ISPs" where there is zero role in selection of the other material being republished.

There is still some question though about what happens if you do actively select the material. Cremers played a very limited editorial role. What if you go looking for criticism of someone and simply quote all that? And what if you're a respected blogger and the original sources were just Usenet bigmouths?

EFF: One court has limited §230 immunity to situations in which the originator "furnished it to the provider or user under circumstances in which a reasonable person...would conclude that the information was provided for publication on the Internet...."

Be wary, too, of editing that changes the meaning. Simply deleting some statements that you thought were irrelevant but which the plaintiff thought were mitigating could get you in trouble!


Zeran v AOL

This was a §230 case that extended the immunity to cover at least some forms of distributor liability. The ruling was by the Fourth Circuit.

Someone posted a fake ad for T-shirts with tasteless slogans related to the Oklahoma City bombing, listing Kenneth Zeran's home phone number. Zeran had nothing to do with the post (although it is not clear whether the actual poster used Zeran's number intentionally). For a while Zeran was getting hostile, threatening phone calls at the rate of 30 per hour.

Zeran lost his initial lawsuit against AOL.

Zeran appealed to the 4th circuit, arguing that §230 leaves intact "distributor" liability for interactive computer service providers who possess notice of defamatory material posted through their services.

Publisher liability: liability even without knowledge of defamatory material's inclusion:

Distributor liability: liability for knowingly distributing defamatory material

Zeran argued that AOL had distributor liability once he notified them of the defamatory material.

Zeran lost. In part because he "fails to understand the practical implications of notice liability in the interactive-computer-service context"; note that the court here once again tried to understand the reality of the internet. The court also apparently felt that AOL was still acting more as publisher than distributor, at least as far as §230 was concerned.


Still to be resolved: what if I quote other defamatory speakers on my blog in order to "prove my point"? Batzel v Cremers doesn't entirely settle this; it's pretty much agreed Cremers did not intend to defame Batzel.

There's also the distributor-liability issue left only partly settled in Zeran.

Barrett v. Rosenthal, Nov. 20, 2006: the California Supreme Court affirms the core §230 ruling

Rosenthal posted statements on Internet newsgroups about two doctors who operated Web sites aimed at exposing fraud in alternative medicine. Her posts quoted an allegation by Tim Bolen that one of the doctors engaged in "stalking".

From http://www.gannett.com/go/newswatch/2006/november/nw1130-3.htm

In the case before the California Supreme Court, the doctor [Barrett] claimed that by warning Rosenthal that Bolen's article was defamatory, she "knew or had reason to know" that there was defamatory content in the publication. Under traditional distributor liability law, therefore, Rosenthal should be responsible for the substance of Bolen's statements, the doctor claimed. The court rejected the doctor's interpretation, saying that the statute rejects the traditional distinction between publishers and distributors, and shields any provider or user who republishes information online. The court acknowledged that such "broad immunity for defamatory republications on the Internet has some troubling consequences," but it concluded that plaintiffs who allege "they were defamed in an Internet posting may only seek recovery from the original source of the statement."

Barrett could still sue Bolen. But Bolen might not have had any money, and Barrett would have to prove that Bolen's original email, as distributed by Bolen, was defamatory. If Bolen sent it privately, or with limited circulation, that might be difficult.

See also wikipedia article http://en.wikipedia.org/wiki/Barrett_v._Rosenthal

Rosenthal was arguably even more of an Ordinary User than Ton Cremers.


Jane Doe v MySpace: §230 applies to liability re physical harm

Jane Doe sued on behalf of Julie Doe, her minor daughter. Julie was 13 when she created a MySpace page, and 14 when she went on a date with a 19-year-old who then assaulted her. On the face of it, Doe claims that the suit is about MySpace failing to protect children, or failing to do SOMETHING. But the court held that the suit was really about MySpace's liability as publisher of Julie Doe's postings. Note that this isn't libel law at all. The court argued that:

It is quite obvious that the underlying basis of Plaintiff's claims is that, through postings on MySpace, *** and Julie Doe met and exchanged personal information which eventually led to ... the sexual assault.

Therefore the case is in fact about publication, and therefore MySpace is immune under §230.


Similar case (Doe v Bates): Yahoo was sued because someone posted child pornography on a yahoo group. (Note that Yahoo here is a traditional ISP). ("Doe" represented the anonymized parents of an alleged child victim.)



Here's a §230 case from http://www.entrepreneur.com/tradejournals/article/189703316_3.html [dead link?] dealing with websites that allowed anonymous postings:

In Donato [v Moldow], two members of the Emerson Borough Council [New Jersey] sued a Web site operator and numerous individuals after they used pseudonyms when posting on the Web site for "defamation, harassment, and intentional infliction of emotional distress." (74) The appellants argued that Stephen Moldow, the website operator, was liable for the damages because he was the publisher of the website. (75) Much to their chagrin, the trial judge found that Moldow was immune from liability under the Communications Decency Act, (76) and the appellate court agreed. (77) The court reasoned that:

The allegation that the anonymous format encourages defamatory and otherwise objectionable messages 'because users may state their innermost thoughts and vicious statements free from civil recourse by their victims' does not pierce the immunity for two reasons: (1) the allegation is an unfounded conclusory statement, not a statement of fact; and (2) the allegation misstates the law; the anonymous posters are not immune from liability, and procedures are available, upon a proper showing, to ascertain their identities. (78)

Note that Moldow was merely the operator here; he was not doing anything to select content.


Here's a discussion of whether it is time to rein in §230: http://arstechnica.com/tech-policy/news/2009/03/a-friendly-exchange-about-the-future-of-online-liability.ars.
The participants are Adam Thierer of the Progress & Freedom Foundation and John Palfrey of Harvard Law School. Palfrey believes §230 needs to be modified in cases like Jane Doe v MySpace, where Doe's daughter was assaulted due to material published on MySpace (specifically, due to email exchanges between Doe's daughter and the perpetrator). Palfrey believes that such cases should be heard by the courts, but that the steps MySpace took to protect minors would be taken into consideration. Note that Palfrey apparently believes in the fairness and appropriateness of the legal system; many ISPs, on the other hand, don't agree and would do just about whatever it took to make sure cases never arose.

At the bottom of the last page, Palfrey suggests some alternatives for §230.

Here's an example of §230 being used to defend event-ticket resellers; the claim is that the sites in question are essentially just auction sites, and that the actual reseller was the person who offered their ticket online.
http://cyberlaw.stanford.edu/packet/200810/section-230-cda-may-%E2%80%93-or-may-not-%E2%80%93-immunize-online-marketplace-provide



Note that §230 grants immunity without requiring any balancing obligations. There is no "takedown" requirement for "internet providers and users" to remove defamatory content on request, as there is for example in OCILLA (the DMCA version). There is not even a requirement that the internet provider/user cooperate with an investigation of the alleged defamation.


Dart v Craigslist, 2009

Cook County Sheriff Tom Dart sued Craigslist for hosting advertisements for prostitution. Dart also claimed that Craigslist took on a more active role than simply publishing by virtue of maintaining categories like "adult services" and "w4m" (women for men), and by providing a search function that might enable users to search for ads that used codewords for prostitution (eg "roses" or "diamonds" as stand-ins for "dollars").

What do you think of the idea that providing a search function causes a site to lose §230 immunity? How would

Craigslist ads are generally free. However, they began charging in 2008 for adult-services ads, at least in part because payment would make it difficult or impossible for posters to remain anonymous (Bitcoin notwithstanding). Overall, it seems Craigslist was not happy with these ads, but did not implement an "individual review" policy until 2011.

The CDA does include the following:

(3) State law
Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.

The first sentence here suggests that the CDA is not intended to interfere with state laws against prostitution. The second, however, suggests that §230 protections are indeed intended to apply to online speech cases that may run afoul of state laws.

Prostitution is a violation of state law, of course, and Dart's complaint stated that Craigslist itself was "solicit[ing] for a prostitute", under the broader meaning of "soliciting". Craigslist, Dart claimed, was also "knowingly assisting" others in finding prostitutes, also against state law. The federal district court did not buy this argument. From the decision of Judge John Grady:

"Facilitating" and "assisting" encompass a broader range of conduct, so broad in fact that they include the services provided by intermediaries like phone companies, ISPs, and computer manufacturers. Intermediaries are not culpable for "aiding and abetting" their customers who misuse their services to commit unlawful acts. [p 14]

The court did however point to the fact that Craigslist specifically and repeatedly warned users not to post prostitution ads or other illegal ads. Should they have to include such warnings?

The court also made reference to Does v GTE, in which a previous court ruled

that it was inconsistent with the statute's apparent purpose to encourage monitoring ("Protection for 'good samaritan' blocking and screening of offensive material") to read §230(c)(1) to immunize internet-service providers who do nothing to monitor the content they make available to the public [emphasis added by pld; from Dart v Craigslist p 12]

What do you think of that potential §230 limitation: that to receive §230 protection you must do at least some content monitoring? If you don't, you can maybe fall back on the Cubby v Compuserve defense that you have only distributor liability. Should the "distributor" classification apply to a site like Craigslist if they did no monitoring?


Chicago Lawyers v Craigslist, 2006-2008

Craigslist has also been sued for posting housing ads that contained discriminatory language (from "no minorities" to "clean godly Christian male" to "no children"); most (all?) of these cases were also set aside on §230 grounds. One case was brought in 2006 by the Chicago Lawyers' Committee for Civil Rights; the Seventh Circuit (in Chicago) ruled that §230 protected Craigslist.

Sort of. The Seventh Circuit's decision included analysis that might be unnecessary if a strict §230 protection was applied. The Court noted that there were 30 million Craigslist posts a month, and that "fewer than 30 people ... operate the system". They did note that "Neither side's argument finds much support in the statutory text" of §230; the Seventh Circuit is clearly unhappy with the interpretation of §230 as a form of immunity. Instead, they suggest "[w]hy not read §230(c)(1) as a definitional clause rather than as an immunity from liability", that is, it declares that an ISP is not an author and not a publisher. They also hinted that full §230 protection might apply only to those who did some content monitoring.

Still, at the end of the day the Seventh Circuit went along with the usual interpretation of §230:

What §230(c)(1) says is that an online information system must not “be treated as the publisher or speaker of any information provided by” someone else. Yet only in a capacity as publisher could craigslist be liable under [the Fair Housing Act].

It is possible that the Seventh Circuit would be less inclined to reject online distributor liability than the Fourth or Ninth Circuits.


Criminal Libel

There is such a thing! From http://law.jrank.org/pages/1563/Libel-Criminal.html:

At common law, libel was recognized as a criminal misdemeanor as well as an individual injury justifying damages (a tort). Prosecutions of the offense had three goals: protection of government from seditious statements capable of weakening popular support and causing insurrection; reinforcement of public morals by requiring a "decent" mode of community discourse; and protection of the individual from writings likely to hold him up to hatred, contempt, or ridicule. The protection of the individual, a goal that is generally left to tort law, was justified by the criminal law's responsibility for outlawing statements likely to provoke breaches of peace.

It's hard to see how anything on the internet could result in an immediate breach of the peace, as compared, say, to leafleting at a protest march, or using a bullhorn to incite a crowd. Criminal libel prosecutions have been extremely rare for the past ~70 years. When they do occur, they usually represent either an overzealous police department or someone rich and powerful who doesn't want to bring a civil suit directly. Under criminal-libel laws, the government foots the bill for what arguably should be the plaintiff's case.

Criminal Libel is sometimes justified as (and sometimes limited to) a way of protecting the reputations of the dead; living people can sue.

See http://www.firstamendmentcenter.org/commentary.aspx?id=12468 for a 2003 example at the University of Northern Colorado involving a new satirical newsletter published by Thomas Mink:

To spice up the first issue, Mink doctored a photograph of well-known UNC finance professor Junius Peake so that he resembled Gene Simmons of KISS in full makeup. Mink described his digital creation as “Junius Puke,” editor in chief of the publication.

(See http://webspace.webring.com/people/jt/thehowlingpig)

The police charged Mink, but the local prosecutor insisted that Mink "was in no danger of prosecution"; ie, his office would never have followed up on prosecuting the case. However, this was less clear to Mink, and the original arrest and equipment seizure were apparently solely for criminal libel. Mink's case was not dropped until he went before a federal judge in Colorado.

Colorado is apparently serious about this. From 2008, at http://www.firstamendmentcenter.org/news.aspx?id=20937:

FORT COLLINS, Colo. — A man (J.P. Weichel) accused of making unflattering online comments about his ex-lover and her attorney on Craigslist has been charged with two counts of criminal libel. ... Police obtained search warrants for records from Web sites including Craigslist before identifying Weichel as the suspect.

Note that a search warrant cannot be obtained in a civil suit!

The doctrine of criminal libel is severely at odds with free speech. Nonetheless, it may be on the rise, as states see it as the only way to rein in the runaway Internet libel released by §230.

Another libel legal theory is that of group libel: you can be sued if you make defamatory remarks about a group of people (eg a racial/ethnic/religious group), without singling out any specific individual. The courts have over the years not been terribly receptive to this theory.



There is such a thing as conventional libel on the internet, and there are many lawsuits involving it. The point of §230 is that you can avoid the risk by quoting someone else.

Does the Mink case involve §230, or could Peake have sued Mink directly for defamation?

Earlier we looked at Dozier Internet Law; one of their clients, Sue Scheff, sued Carey Bock over defamation, and won an $11.3 million judgement. By default, as it turned out: Bock, in the aftermath of losing her home to Hurricane Katrina, could not afford to travel to Florida to defend herself.


Google Conviction in Italy

On February 24, 2010, three executives of Google were convicted in Italy of violating criminal privacy laws; each received a six-month suspended sentence. At issue was YouTube's delay in removing a 2006 video of four youths beating a boy with autism and/or Down syndrome.

Google complied with a request from Italian police for removal of the video, but possibly was not so prompt in responding to earlier requests. Under Italian privacy law, videos cannot be posted online without the consent of all participants (Illinois has a similar law regarding audio recordings, though the Illinois law simply forbids the act of recording itself).

The Italian prosecutor's argument was that, through advertising revenue, Google profited from the video, and thus was criminally responsible.

In the US, §230 of the CDA makes Google immune to civil suits in such cases; free-speech rights ensure Google would be immune to criminal prosecution.

The European Union has issued Directive 2000/31/EC, dated June 8, 2000 (http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0031:EN:HTML), which was intended in part to limit ISP liability. See paragraph 40, for example. However, the directive is rambling and quite lengthy, and the fact that Google profited from the YouTube video through advertising seems to have been interpreted by the Italian prosecutor as voiding ISP status. Note that prologue paragraph 40 states that one goal is the "development of rapid and reliable procedures for removing and disabling access to illegal information". The last phrase is quite striking in and of itself: what makes information intrinsically illegal? Prologue paragraph 42 states

The exemptions from liability established in this Directive cover only cases where the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network over which information made available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient; this activity is of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.

The relevant part of the actual directive is as follows:

Article 12

"Mere conduit"

1. Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network, Member States shall ensure that the service provider is not liable for the information transmitted, on condition that the provider:

(a) does not initiate the transmission;

(b) does not select the receiver of the transmission; and

(c) does not select or modify the information contained in the transmission.


The New York Times has suggested that one issue is Italian prime minister Silvio Berlusconi's control of television and traditional media, which compete with the internet. The case at hand, if upheld, could make YouTube unavailable in Italy.

Finally, note that, by all accounts, YouTube has been very successful in filtering out nudity, even mild forms. They could probably figure out how to filter other things, if they really wanted to.

How does this Google/YouTube problem differ from the problem faced by Craigslist? Note that the only reason Craigslist ran into trouble, rather than, say, eBay, was that the latter does not provide useful local listings (and is not free to listers).




Spam

1996: AOL v Cyber Promotions (Baase, p 161)
Note that CP sued AOL for blocking CP's spam! Eventually AOL sued CP.

Intel-Hamidi case: Ken Hamidi sent email to 30,000 Intel employees. Intel sued. It eventually reached the California Supreme Court, which ruled in Hamidi's favor.

Harris Interactive sued the Mail Abuse Prevention System, for blocking their opinion-poll email. One interesting claim by Harris is that they were "turned in" to MAPS by a competitor. Harris dropped the suit.

CAN-SPAM Act

People have a right to send email. Sort of. Maybe not companies, though? 



Regulated classes of speech

All these categories are things that, once upon a time, private individuals seldom if ever got caught up in.

p 166: Commodity Futures Trading Commission (CFTC): they required that, if you wrote about commodity futures, you needed a license. The regs were originally intended to cover traders, but the CFTC applied them to newsletters too, and then the web. (These latter rules were deleted in 2000.)

New York State outlawed not only the direct sale of wine from out-of-state-wineries to New Yorkers, but also the advertising. What about web pages?

p 176: political campaign laws. Anything you do that is "coordinated" with a political campaign is considered to be a contribution. These are subject to limitations, and to reporting requirements.

McCain-Feingold: you cannot even mention a candidate's name or show a candidate's face (at least in broadcast ads) within 60 days of an election.

In 2004, the Federal Election Commission was ordered by a judge to write rules extending the McCain-Feingold rules to the Internet.

How would this affect bloggers? Would they be silenced?

Note that the opposing candidates are VERY likely to file complaints.

2006 FEC rules on the internet: it's ok as long as you aren't paid, EVEN IF the political activity is "in coordination with" the candidate.

2007: Supreme Court struck down the McCain-Feingold restriction on issue ads.

2010: Supreme Court struck down most remaining restrictions on corporate speech

Home selling: if you list your house online, do you need a real-estate license?



Libel and Internet complaints about corporations

A selected few "sucks" sites. Search for (large company name) + "sucks" to find more.

mcspotlight.org: mclibel
uopsucks.com (University of Phoenix): placeholder site, but see here
walmartsucks.com: placeholder site
walmartsucks.org: you betcha!
gmsucks.net: domain lookup error
lyingscumbags.com: ah, but there are anti-GM sites! Well, were.
fordREALLYsucks.com: going strong!
mychryslersucks.com: mine doesn't, though! 1990 and still going strong!
ibmsucks.org: active!
microsoftsucks.org: tied to applesucks.org
applesucks.org: tied to microsoftsucks.org
googlesearchsucks.com: maybe some evil after all?
paypalsucks.com: these folks are really ticked off!
bankofamericasucks.com: everything is user-contributed
whylinuxsucks.org: a serious site on linux improvement

How can these sites get away with this? Sometimes, suing just does not have the desired effect.

The McLibel case

Unemployed ex-postman Dave Morris and part-time bar worker Helen Steel called McDonald's a multinational corporate menace - abusing animals, workers and the environment and promoting an unhealthy diet.
http://www.organicconsumers.org/mclib.html

[NB: why are Morris & Steel identified above by their occupations?]

They were distributing pamphlets ("What's Wrong With McDonald's?") claiming, among other things, that McDonald's:

  1. promotes an unhealthy diet and exploits children with its advertising
  2. is cruel to animals
  3. is hostile to unions and pays its workers low wages
  4. contributes to rainforest destruction and Third World starvation

Note that their story had NOTHING to do with the internet!

McDonald's had done a great deal of investigating; they had hired spies to infiltrate London Greenpeace to get the names of members involved. This wasn't entirely coordinated; two spies spied on each other for an extended period. Another spy had a long romantic relationship with a real member.

Morris & Steel raised £35,000 for their defense, most of which apparently went to paying for transcripts.

From http://mcspotlight.org/case/trial/story.html:

Mr Justice Bell took two hours to read his summary to a packed court room. He ruled that Helen and Dave had not proved the allegations against McDonald's on rainforest destruction, heart disease and cancer, food poisoning, starvation in the Third World and bad working conditions. But they had proved that McDonald's "exploit children" with their advertising, falsely advertise their food as nutritious, risk the health of their most regular, long-term customers, are "culpably responsible" for cruelty to animals, are "strongly antipathetic" to unions and pay their workers low wages.

And so, Morris & Steel were held liable for £60,000 in damages.


On 15th February 2005, the European Court of Human Rights in Strasbourg declared that the mammoth McLibel case was in breach of the right to a fair trial and right to freedom of expression.


The phrase Libel Terrorism is a play on "libel tourism", the practice of suing for libel in the UK (or another friendly venue, though it's hard to beat the UK's "defendant must prove truth" doctrine, plus the "plaintiff need not prove malice" part). 

New York now has the Libel Terrorism Protection Act.

Case: Sheikh Khalid bin Mahfouz v Rachel Ehrenfeld

Rachel Ehrenfeld wrote Funding Evil, a rather polemical book about how terrorist organizations gain funding through drug trafficking and other illegal strategies. The first edition appeared in 2003. The book apparently alleges that Sheik Khalid bin Mahfouz is a major participant in terrorist fundraising.  Mahfouz sued in England, although the book was not distributed there; however, 23 copies were ordered online from the US. In 2005 the court in England found in Mahfouz's favor, describing Ehrenfeld's defense as "material of a flimsy and unreliable nature" (though some of that may have been related to the costs of mounting a more credible defense, and Ehrenfeld's conviction that no such defense should be necessary), and ordered Ehrenfeld to pay $225,000.

Ehrenfeld filed a lawsuit against Mahfouz in the US, seeking a declaration that the judgement in England could not be enforced here. The case was dismissed because the judge determined that the court lacked jurisdiction over Mahfouz. A second ruling arriving at the same conclusion came in 2007.

In May 2008, New York State passed the Libel Terrorism Protection Act, which offers some form of protection against enforcement in New York State of libel claims from other countries. However, Mahfouz has not sought to collect, and probably will not.


gatt.org, and cyberhoaxes

(compare wto.org and wipo.int)

Is this funny? Or serious? Are there legitimate trademark issues?

Note that it keeps changing.

Try to find the links that are actually there.
gatt.org links and Dow's Acceptable Risk seem pretty permanent.



Planned Parenthood v American Coalition of Life Activists

With libel, §230 has been interpreted as saying you have immunity for posting material that originated from someone else, if your understanding was that the other party intended the material for posting.

With "threat speech", the courts have held that speech qualifies as that if a reasonable listener (or reader) feels that a threat is intended. Your intentions may not count at all.

In the case Planned Parenthood v American Coalition of Life Activists (ACLA, not to be confused with the ACLU, the American Civil Liberties Union), Planned Parenthood sued ACLA over a combination of "wanted" posters and a website that could be seen as threatening abortion providers. In 1993 a "wanted" poster for Dr David Gunn, Florida, was released and Dr Gunn was later murdered. Also in 1993, a wanted poster for Dr George Patterson was released and Dr Patterson was subsequently murdered. In 1994 a poster for Dr John Britton, Florida, was released and Dr Britton was later murdered, along with James Barrett.

I've never been able to track down any of these individual posters (which is odd in and of itself), but here's a group one:

The Deadly Dozen


When US Rep Gabrielle Giffords (D, AZ) was shot in January 2011, some people pointed to the poster below from Sarah Palin's site, and to her Twitter line, Don't Retreat, Instead - RELOAD! A June 2010 post from Giffords's election opponent Jesse Kelly said, "Get on Target for Victory in November Help remove Gabrielle Giffords from office Shoot a fully automatic M16 with Jesse Kelly [sic]"

But there are multiple differences. Perhaps the most important is that no new crosshair/target/wanted-style posters have been released by anyone since the Tucson shootings. Under what circumstances might people view this kind of poster as a threat? Should candidates and political-action committees be required to address perceived threats?

targeting congress with crosshairs


After the murders of Drs Gunn, Patterson, and Britton, the names of the murdered abortion providers were displayed with a strikethrough on a website run by Neal Horsley.

Why would a judge issue rules on what typestyle (eg strikethrough) a website could use? Did the judge in fact issue that ruling, or is that just an exaggeration from the defendants? The actual injunction (from the DC judge ruling link, below) states

In addition, defendants are enjoined from publishing, republishing, reproducing and/or distributing in print or electronic form the personally identifying information about plaintiffs contained in Trial Exhibits 7 and 9 (the Nuremberg Files) with a specific intent to threaten. [emphasis added by pld]

That is much more general than just "no strikethrough", though the strikethrough was widely interpreted as a "specific intent to threaten". But intent is notoriously hard to judge, and in fact (as we shall see) the case ended up hinging more on the idea that Horsley's site would be interpreted as a threat by a neutral observer.

The "Nuremberg" website was founded by Horsley with the nominal idea of gathering evidence for the day when abortion providers might be tried for "crimes against humanity". (In such cases, the defense "it was legal at the time" is not accepted.)

In 1998, Dr Barnett Slepian was killed at home. According to Horsley, the day before the murder his only intent was the one cited above; the day after, he added Dr Slepian's name, with a strikethrough. Slepian's name had not been there before, leading Horsley to protest very strongly that his site could not have been a threat against Slepian. (The lawsuit was filed by other physicians who felt it was a threat to them; Horsley is silent on this.)

Original site: christiangallery.com, christiangallery.com/atrocity, /atrocity/aborts.html
Archived site without strikethrough: cs.luc.edu/pld/ethics/nuremberg/aborts.html (though the struck-through page follows!)
Archived site with strikethrough: cs.luc.edu/pld/ethics/nuremberg/aborts2.html (Dr Gunn is col 2, row 8).
Archived site of Horsley's own before-and-after: cs.luc.edu/pld/ethics/nuremberg/californicate.htm (part of his own attempt to justify his site to the public).

After looking at these, consider Horsley's claim,

All we’ve done, and all really anybody’s accused us of doing, is printing factually verifiable information... If the First Amendment does not allow a publisher to publish factually verifiable information, then I don’t understand what the First Amendment’s about.

Do you think this is an accurate statement?

The civil case was filed in 1995, after some abortion providers had been murdered and others attacked (eg Dr Hugh Short, who was shot and wounded), and "wanted" posters were issued by ACLA for others. There was a federal law, the 1994 federal Freedom of Access to Clinic Entrances Act (FACE), that provided protections against threats to abortion providers.

Horsley's site was created in 1997, and was added to the case. By 1997, the internet was no longer new, but judges were still having difficulty figuring out what standards should apply.

Horsley's actual statements are pretty much limited to facts and to opinions that are arguably protected. He does not appear to make any explicit calls to violence.

Planned Parenthood claimed the site "celebrate[s] violence against abortion providers".

For a while, Horsley had trouble finding ISPs willing to host his site. The notion of ISP censorship is an interesting one in its own right. The Stanford site, below, claims that OneNet, as the ISP (carrying traffic only) for the webhosting site used by Horsley, demanded that Horsley's content be removed.

Here's a Stanford student group's site about the case. The original lawsuit was brought in 1995 by Planned Parenthood (and some abortion providers) against American Coalition of Life Activists (ACLA) et al. Horsley was not party to that suit; his Nuremberg site was in fact not created until 1997. The original lawsuit was over threatening "Wanted" posters depicting abortion providers; Horsley's site (but not Horsley himself) was added later. In retrospect it seems reasonable to think that, if it were not for the context created by the "Wanted" posters, there would have been no issue with the Nuremberg Files web pages.

The central question in the case is whether the statements amounted to a "true threat" that met the standard for being beyond the bounds of free-speech protection.

The District Court judge (1999) instructed the jury to take into account the prevailing climate of violence against abortion providers; the jury was also considering not an ordinary civil claim but one brought under the Freedom of Access to Clinic Entrances Act (FACE), which allows lawsuits against anyone who "intimidates" anyone providing an abortion. (The first-amendment issue applies just as much with the FACE law as without.) The jury returned a verdict against the ACLA for $100 million, and the judge granted a permanent injunction against the Nuremberg Files site (Horsley's).

DC Judge (full order at http://webpages.cs.luc.edu/~pld/ethics/nuremberg/PPvACLA_trial.html):

I totally reject the defendants' attempts to justify their actions as an expression of opinion or as a legitimate and lawful exercise of free speech in order to dissuade the plaintiffs from engaging in providing abortion services.

See also the following paragraph. 

Under current free-speech standards, you ARE allowed to threaten people. You ARE allowed to incite others to violence.

You are NOT allowed to incite anyone to imminent violence, and you are NOT allowed to make threats that you personally intend to carry out.


The case was appealed to a 9th Circuit three-judge panel, which overturned the injunction. Judge Kozinski wrote the decision, based on NAACP v Claiborne Hardware, SCOTUS 1982.

NAACP v Claiborne Hardware synopsis: The NAACP had organized a boycott, beginning in 1966, of several white-owned businesses, and had posted activists to take down the names of black patrons; these names were then published and read at NAACP meetings. The NAACP liaison, Charles Evers [brother of Medgar Evers], had stated publicly that those ignoring the boycott would be "disciplined" and at one point said "If we catch any of you going in any of them racist stores, we're gonna break your damn neck." The Supreme Court found in the NAACP's favor, on the grounds that there was no evidence Evers had authorized any acts of violence, or even made any direct threats (eg to specific individuals).

Judge Kozinski argued that whatever the ACLA was doing was less threatening than what Evers had done, and on that basis the panel threw out the verdict.

However, another feature of the Claiborne case was that, while there were several incidents of minor violence directed at those who were named as violators of the boycott, in fact nobody was seriously harmed. Furthermore, the case was brought by the merchants, who had experienced essentially no violence whatsoever; the Supreme Court found that the nonviolent elements of the boycott were protected speech.


The full Ninth Circuit then heard the case, en banc.

The ruling is by Judge Rymer, with dissents by Judges Reinhardt, Kozinski (author of the decision of the three-judge panel), and Berzon (of Batzel v Cremers).

See http://webpages.cs.luc.edu/~pld/ethics/nuremberg/PPvACLA_9th_enbanc.pdf

(The opinion opens with five pages just listing the plaintiffs and defendants.)

Here's Rymer's problem with the NAACP v Claiborne analogy: 7121/41, at [8]

Even if the Gunn poster, which was the first "WANTED" poster, was a purely political message when originally issued, and even if the Britton poster were too, by the time of the Crist poster, the poster format itself had acquired currency as a death threat for abortion providers. Gunn was killed after his poster was released; Britton was killed after his poster was released; and Patterson was killed after his poster was released.

[Neal Horsley claims that no one was listed in the Nuremberg Files until after they were attacked.]

Here's Rymer's summary: 7092/12, 3rd paragraph

We reheard the case en banc because these issues are obviously important. We now conclude that it was proper for the district court to adopt our long-standing law on "true threats" to define a "threat" for purposes of FACE. FACE itself requires that the threat of force be made with the intent to intimidate. Thus, the jury must have found that ACLA made statements to intimidate the physicians, reasonably foreseeing that physicians would interpret the statements as a serious expression of ACLA's intent to harm them because they provided reproductive health services. ...

7093/13 We are independently satisfied that to this limited extent, ACLA's conduct amounted to a true threat and is not protected speech

Threats are not the same as libel: 7099/19

Section II: (p 7098/18) discussion of why the court will review the facts (normally appeals courts don't) as to whether ACLA's conduct was a "true threat"

Section III (p 7105): ACLA claims its actions were "political speech" and not an incitement to imminent lawless action. The posters contain no explicitly threatening language!

7106/26, end of 1st paragraph:

Further, ACLA submits that classic political speech cannot be converted into non-protected speech by a context of violence that includes the independent action of others.

This is a core problem: can context be taken into account? Can possible actions of others be taken into account?

Text of FACE law:

Whoever ... by force or threat of force or by physical obstruction, intentionally injures, intimidates or interferes with or attempts to injure, intimidate or interfere with any person because that person is or has been [a provider of reproductive health services] ... [N]othing in this section shall be construed ... to prohibit any expressive conduct ... protected from legal prohibition by the First Amendment

Violating FACE subjects the violator to civil remedies, though perhaps not to prior restraint.

The decision cited the following Supreme Court cases:

Brandenburg v Ohio, SCOTUS 1969: 1st amendment protects speech advocating violence, so long as the speech is not intended to produce "imminent lawless action" (key phrase introduced) and is not likely to produce such action.

This was an important case: it strengthened and clarified the older "clear and present danger" rule (under which speech can be restricted only in such situations), first spelled out in Schenck v US, 1919, replacing it with the "imminent lawless action" standard.

Clarence Brandenburg was a KKK leader who invited the press to his rally, at which he made a speech referring to the possibility of "revengeance" [sic] against certain groups. No specific attacks or targets were mentioned.

Robert Watts v United States, SCOTUS 1969.  Watts spoke at an anti-draft rally (actually a DuBois Club meeting):

"They always holler at us to get an education. And now I have already received my draft classification as 1-A and I have got to report for my physical this Monday coming. I am not going. If they ever make me carry a rifle the first man I want to get in my sights is L.B.J."

Watts' speech was held to be political hyperbole. This case overturned long precedent regarding threats.

Particular attention was given to NAACP v Claiborne. The crucial distinction: there was essentially no actual violence there, certainly none against the merchants who sued! The Supreme Court's decision was in effect that Evers' speeches did not incite illegal activity, and thus did not lead to the business losses. No "true threat" determination was made, nor did one need to be made.

Also, Evers' overall tone was to call for non-violent actions such as social ostracism.

Here is another important case, decided by the Ninth Circuit, in which a statement was ruled to be a genuine threat (sort of):

Albert Roy v United States, Ninth Circuit, 1969: USMC private Roy heard that then-President Nixon was coming to the base, and said to a telephone operator "I hear the President is coming to the base. I am going to get him". Roy's conviction was upheld, despite his insistence that his statement had been a joke and that he had promptly retracted it. This case was part of a move toward a "reasonable person" test, eventually spelled out by the Ninth Circuit in United States v Orozco-Santillan, 1990:

Whether a particular statement may properly be considered to be a threat is governed by an objective standard -- whether a reasonable person would foresee that the statement would be interpreted by those to whom the maker communicates the statement as a serious expression of intent to harm or assault.

Note this "reasonable person" standard. No hiding behind "that's not really what we meant". In Roy's case, part of the issue was that the threat was reasonable enough to frighten the operator, and thus to affect the security preparations for the upcoming visit.



In the PP v ACLA decision, the Ninth Circuit wrote:

It is not necessary that the defendant intend to, or be able to carry out his threat; the only intent requirement for a true threat is that the defendant intentionally or knowingly communicate the threat.

[communicates it as a serious threat, that is, not just hyperbole]

ACLU amicus brief: The person must have intended to threaten or intimidate.

Rymer: this intent test is already included in the language of FACE, and (as the summary quoted above notes) the jury must have found that ACLA met it. Did ACLA intend to "intimidate"?

Two dissents argue that the speaker must "actually intend to carry out the threat, or be in control of those who will"

But Rymer argues that the court should stick with the "listener's reaction"; ie the reasonable-person standard again.

Conclusion of this line of argument (intent v how it is heard):

7116/36, at [7] Therefore, we hold that "threat of force" in FACE means what our settled threats law says a true threat is: a statement which, in the entire context and under all the circumstances, a reasonable person would foresee would be interpreted by those to whom the statement is communicated as a serious expression of intent to inflict bodily harm upon that person. So defined, a threatening statement that violates FACE is unprotected under the First Amendment.

Crucial issue: the use of the strikeout (for providers who had been killed) and grey-out (for those wounded). This is what crosses the line.

7138/53, 2nd paragraph:

The posters are a true threat because, like Ryder trucks or burning crosses, they connote something they do not literally say, yet both the actor and the recipient get the message.

The Supreme Court declined to hear the case. The Ninth Circuit had established that the speech in question met the standard for being a true threat, and the ACLA would have had to argue that some factual interpretations were mistaken. But the Supreme Court does not generally decide cases about facts; it accepts cases about significant or conflicting legal principles.

See also Baase, p 190, Exercise 3.23:

An anti-abortion Web site posts lists of doctors who perform abortions and judges and politicians who support abortion rights. It includes addresses and other personal information about some of the people. When doctors on the list were injured or murdered, the site reported the results. A suit to shut the site for inciting violence failed. A controversial appeals court decision found it to be a legal exercise of freedom of speech. The essential issue is the fine line between threats and protected speech, a difficult issue that predates the Internet. Does the fact that this is a Web site rather than a printed and mailed newsletter make a difference? What, if any, issues in this case relate to the impact of the Internet?

Finally, you might wonder why, with all the threats of violence made during the course of the civil rights movement by whites against blacks, the case NAACP v Claiborne that comes to us is an allegation of violence by blacks against blacks, filed by whites. I think it's safe to say that the answer has nothing to do with who made more threats, and everything to do with who could afford more lawyers.



Traditionally, it has been the "conservative" position that all threats of violence are to be taken seriously, and that maintenance of good social order often trumps individual rights. Conversely, it is traditionally the "liberal" position that some cases about threats raise a legitimate Free Speech issue, and that our rights may often trump maintenance of the status quo. Note, however, that some have identified the Ninth Circuit's ruling here as another "liberal" opinion.