Computers and Speech
Communications Decency Act
Batzel v Cremers
More §230 Cases
Google in Italy
Planned Parenthood v ACLA
Germany and Hate Speech
LICRA v Yahoo
Yahoo v LICRA
Google vs Censorship
Illinois and Eavesdropping
Libel Suits Against Employees
Source Code as Speech
The Founding Fathers probably had political
speech in mind when drafting the First Amendment:
Congress shall make no law ... abridging the
freedom of speech, or of the press;
Right off the bat note the implicit distinction between "speech" and "the
press": blogging wasn't foreseen by the Founding Fathers!
For that matter, the Founding Fathers did not foresee "functional"
information: computer files, for example, that are in some ways
speech-like, but which have significant consequences when executed or interpreted.
While in general laws tend towards a utilitarian justification, the right
of free speech, while not absolute, is seen as pretty fundamental.
Specifically, speech may be restricted only if doing so is the least
restrictive means of accomplishing the desired end. In this sense,
respecting freedom of speech under the US constitution can be seen as a
fundamental duty of the government, more akin to deontological reasoning.
The courts have held that Congress can
abridge "offensive" speech. Here are a few examples (some of which no longer apply):
Information about contraception used to be in the category of restricted speech.
- whatever offends the government; examples [Baase 4e p 139 / 5e p 143:
China, France, Georgia] (note the jurisdiction problem)
- sexual material that is indecent:
basis for several US laws; generally these regulations apply to keeping
material from minors.
- sexual material that is obscene,
and thus can be banned for everyone.
- child pornography (banned because its production is harmful to the children involved)
- threats, "hate speech"
- seditious speech
- stock touting
- regulated speech
- wine selling
- investment discussion
- source code
- DRM circumvention (DMCA)
For a while, it was illegal for family-planning clinics receiving federal
funding to discuss abortion. Is this a speech restriction?
The following have been proposed as speech categories that should be banned:
- dieting (ok, not "conventional" dieting, but anorexia & bulimia)
- racial slurs
- sexual slurs
- Nazi materials
- alcohol advertising
- medical information
Traditional categories for free speech categorization (Baase, 4e p 135 / 5e):
- Printed media (newspapers, magazines, books, leaflets, and maybe ...)
- Broadcast media (TV & radio)
- common carriers (telephony, postal system, ISPs)
Where should commercial websites fit? Where should personal websites
(including blogs) fit?
The Citizens United decision was nominally about campaign-finance
rules; Citizens United was a corporation that had produced a political film
and sought to show it. They won, and their case was widely interpreted as
giving free-speech rights to corporations. However, the text of the decision
addressed some fundamental shifts in freedom of the press since the rise of the Internet:
The media exemption [allowing corporate free
speech for "media" companies] discloses further difficulties
with the law now under consideration. There is no precedent supporting
laws that attempt to distinguish between corporations which are deemed to
be exempt as media corporations and those which are not. "We have
consistently rejected the proposition that the institutional press has any
constitutional privilege beyond that of other speakers." .... With the
advent of the Internet and the decline of print and broadcast media,
moreover, the line between the media and others who wish to comment on
political and social issues becomes far more blurred.
Traditionally (actually, even more so now) the government regulates
broadcast TV and radio the most strongly. It is assumed that essentially all
content must be appropriate for minors (the practical issue is sexual
content; the other things are inappropriate for everybody and there's not as
much debate). Cable TV has somewhat greater latitude, but is still subject
to FCC regulation.
(The government has few if any rules about violence
on TV, though laws are occasionally introduced in Congress. The feds did bring the V-chip to every US
television; these are almost universally unused by consumers. Broadcasters
have their own rules about violence, however.)
Note that the list above addresses governmental
restrictions on free speech. There are also civil
restrictions: if you say something defamatory, you may be sued for libel.
Libel is perhaps the biggest issue for "ordinary" people, at least in terms
of creating speech: blogs,
websites, etc. Libel law creates:
- author liability
- publisher liability
- distributor liability
There is also harassing speech, including "cyber-bullying" and stalking.
Regulated classes of speech
All these categories of speech were at one time regulated, but once upon a
time private individuals seldom if ever got caught up in the regulations.
Baase 4e p 152 / 5e p 158: the Commodity-Futures Trading Commission
(CFTC) required that, if you wrote about commodity futures, you needed a
license. The regulations were originally intended to cover traders, but
CFTC applied them to newsletters too, and then the web. (These latter
rules were deleted in 2000.)
New York State outlawed not only the direct sale
of wine from out-of-state-wineries to New Yorkers, but also the advertising.
What about web pages?
Political campaign-finance laws. Anything you do that is "coordinated"
with a political campaign is considered to be a contribution. These are
subject to limitations, and to reporting requirements.
Under the original terms of the McCain-Feingold act, you could not even
include a candidate's name or face in a newspaper article within 60 days
of an election.
In 2004, the Federal Election Commission was ordered by a judge to write
rules extending the McCain-Feingold rules to the Internet. How would this
affect bloggers? Would they be silenced? Note that the opposing candidates
are VERY likely to file complaints.
The FEC issued these new Internet rules in 2006, deciding that blogging
about candidates was ok as long as you weren't paid, even if the
blogging was "in coordination with" the candidate.
2007: The Supreme Court struck down the McCain-Feingold restriction on
issue ads shortly before elections (FEC v Wisconsin Right to Life).
2010: Supreme Court struck down most restrictions on corporate speech
(Citizens United).
2014: Supreme Court struck down most campaign-donation caps on First
Amendment grounds (McCutcheon v FEC).
Home selling: if you list your house online, do you need a real-estate license?
What are we going to look at?
- Everyone as a blogger (and suppression of certain sites)
- Blogger liability
- Source code
- DMCA restrictions on circumvention speech
- LICRA v Yahoo as a jurisdictional infringement of free speech
Sexual material, including pornography
(though that is a pejorative term), has been regulated for a long time.
Miller v California, Supreme Court 1973 (unrelated to US v Miller 1976):
this case established a three-part guideline for determining when
something was legally obscene (as opposed to merely "indecent"):
- must be against the law (eg local law)
- must be against community standards
- must have no redeeming artistic, etc merit
For the internet, community standards
is the problem: what community?
This is in fact a huge problem,
though it was already a problem with mail-order.
As the Internet became more popular with "ordinary" users, there was
mounting concern that it was not "child-friendly". This led to the
Communications Decency Act (CDA), next.
Communications Decency Act
In 1996 Congress passed the Communications Decency Act (CDA) (Baase 4e p
141 / 5e p 146). It was extremely broad.
From the CDA:
[it is forbidden to be someone who] uses any
interactive computer service to display in a manner available to a person
under 18 years of age, any comment, request, suggestion, proposal, image,
or other communication that, in context, depicts or describes, in terms
patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.
On the internet, you cannot tell how old someone is.
Butler v Michigan, 1957: SCOTUS struck down a law making it illegal to
sell material (pornography) in Michigan solely because it might be harmful to minors.
The CDA was widely viewed as an attempt by Congress to curry favor with
a "Concerned Public", while knowing full well it was unlikely to withstand judicial scrutiny.
It did not. The Supreme Court ruled unanimously in 1997 that the
censorship provisions were not ok: they were too vague and did not use the
"least-restrictive means" available to achieve the desired goal.
The Child Online Protection Act (COPA) was passed in 1998. This still
stuck with the "community standards" rule. The law also authorized the
creation of a commission; this was the agency that later wanted some of
Google's query data. The bulk of COPA was struck down.
The Children's Internet Protection Act (CIPA) was passed in 2000 (Baase,
4e p 142 / 5e p 147). Schools that want federal funding have to install
filters. So do public libraries; however, libraries must honor patron
requests to turn the filter off.
SCOTUS upheld CIPA in 2003.
The Chicago Public Library gave up on filters, but did install screen
covers that make it very hard for someone to see what's on your screen.
This both protects patron privacy AND protects library staff from what
might otherwise be a "hostile work environment".
Baase has more on the library situation, 4e p 143/ 5e p 147
Filters are sort of a joke, though they've gotten better. However, they
CANNOT do what they claim. They pretty much have to block translation
sites and all "personal" sites, as those can be used for redirection; note
that many sites of prospective congressional candidates are of this type.
See peacefire.org. And stupidcensorship.com.
This is merely at the technical level. A more insidious problem with
filtering is a frequent conservative bias. Peacefire documents an incident
in 2000 where a web page with quotes about homosexuality was blocked. All
the quotes, however, came from conservative anti-homosexuality websites,
which were not blocked. At a minimum, one can
expect that (as with the MPAA) the blocking threshold for information on
homosexual sexuality will be lower than for heterosexual sexuality. (To be
fair, some conservative sites, such as some sites relating to the Second
Amendment, have also had trouble with blocks.)
Demo: use http://mousematrix.com/
to get to thepiratebay.org. (Or the Tor browser, but that's harder to set up.)
To a degree, problems with over-filtering in K-12 schools are
not serious: student use of computers is fundamentally intended to support
their education, and if a site needed for coursework is blocked then it can
usually be unblocked on request.
In the UK the rule is now to block all Internet pornography for
residential customers, unless the customer has opted in. Other material --
not clearly specified -- is inevitably also blocked. The theory is that the
majority of customers will be unwilling to choose "allow pornography" as an
option, even if other material might be blocked too.
Note that filtering providers generally go to great lengths to keep their
list of blocked sites secret.
Batzel v Cremers
One piece of the CDA survived: §230:
No provider or user of an interactive computer service shall be treated as the
publisher or speaker of any information provided by another information
content provider. [Wikipedia]
Why is this there?
Note that there is no limit of §230 to any particular area of law, eg
libel. (Actually, there are
limits if the issue is copyright law, or criminal law.)
Note also that §230 addresses "publisher" liability and "author"
liability. Another form, not
exempted, is "distributor" liability.
The actual law is here: http://www.law.cornell.edu/uscode/47/usc_sec_47_00000230----000-.html.
Note in particular the exemption sections (e)(1) and (e)(2). Note also
that section 230 is titled "Protection for private blocking and
screening of offensive material".
History of this, as it applies to protecting minors from offensive material:
Cubby v CompuServe: 1991
District court only, New York State (Does anyone remember CompuServe?)
Giant pre-Internet BBS available to paid subscribers. The "rumorville"
section, part of the Journalism Forum, was run by an independent company,
Don Fitzpatrick Associates. Their contract guaranteed DFA had "total
responsibility for the contents". Rumorville was in essence an online
newspaper; essentially it was an expanded gossip column about the
journalism industry. I have no idea who paid whom for the right to be
present on CompuServe.
In 1990, Cubby Inc and Robert Blanchard planned to start a competing
online product, Skuttlebut. This was disparaged in Rumorville. Cubby et al
sued DFA & CompuServe for libel.
CompuServe argued they were only a distributor. The judge agreed that
this meant they escaped liability, and granted summary judgement in their
favor. The court ruled that they had no control at all over content. They
are like a bookstore, or a distributor.
While CompuServe may decline to carry a given
publication altogether, in reality, once it does decide to carry a
publication, it will have little or no editorial control over that
publication's contents. This is especially so when CompuServe carries the
publication as part of a forum that is managed by a company unrelated to CompuServe.
CompuServe has no more editorial control over
such a publication than does a public library, book store, or newsstand,
and it would be no more feasible for CompuServe to examine every
publication it carries for potentially defamatory statements than it would
be for any other distributor to do so.
It was and is generally accepted that distributors have no liability for
content (unless it can be proven that they encouraged the content).
(we'll come back to "distributor liability" later.)
Stratton Oakmont v Prodigy: New
York state court, 1995. On a
financial matters forum called "Money Talk," a Prodigy user (never
identified) posted about Daniel Porush, the president of Stratton Oakmont,
a financial services company. The remarks called Porush a "soon to be
proven criminal" and said that Stratton Oakmont was a "cult of brokers who
either lie for a living or get fired".
Prodigy claimed the Compuserve defense in their motion for summary judgement.
Prodigy lost, because they
promised to monitor for bad behavior on the board. At the very least, they
CLAIMED to the public that they reserved the right to edit or remove
messages. This was in fact part of Prodigy's family-oriented marketing.
Prodigy was trying to do "family values" editing (including the deletion
of profanity), and it cost them.
In legal terms, Prodigy was held to "publisher liability" rather than
the weaker "distributor liability" because they CLAIMED to exercise
editorial control.
Prodigy did have some internal confusion about whether they were for the
"free expression of ideas" or were "family safe".
Prodigy's policy was to ban individual attacks, but not group attacks;
anti-semitic rants did appear and were not taken down.
After Prodigy lost their motion for summary judgement, the case was
settled; Prodigy issued a public apology. In Wall
Street versus America by Gary Weiss, the claim is made that the
settlement did not involve the exchange of money. See http://books.google.com/books?id=iOhGkYqaEdwC&pg=PA213&lpg=PA213&dq=wall+street+versus+america+porush&source=b...t=result,
page 215: "No money changed hands. No money had to change hands."
Weiss also points out that four years later
... Porush and his partners were all carted
off to federal prison. In 1999, Porush and other Stratton execs pleaded
guilty to securities fraud and money laundering for manipulating a bunch
of Stratton IPOs... Stratton really was
a den of thieves. Porush really was
a criminal. [italics in original - pld]
The film The
Wolf of Wall Street is based on Stratton Oakmont. The character Donnie
Azoff is based on Daniel Porush, and is played by Jonah Hill.
Enter the CDA. §230 was intended to encourage
family-values editing, because after the Stratton Oakmont case most
providers were afraid to step in.
Whether this was specifically to encourage providers to remove profanity
& obscenity, the nominal targets of the CDA, or whether it was just a
compensatory free-speech-positive clause in an overall free-speech-
very-negative law is not clear.
Most of Congress apparently did not expect the CDA to withstand judicial review.
Congressional documents suggest that fixing the Stratton Oakmont
precedent was the primary purpose of §230. However, arguably the reason
for fixing Stratton Oakmont was to protect ISPs and websites that did
try to provide a "family-friendly" environment.
Batzel v Cremers summary
Robert Smith was a handyman who worked for Ellen Batzel at her North
Carolina home, doing repairs to house and vehicles, in 1999. Batzel's house
was filled with large paintings in old frames that looked European.
- Batzel told him that she was "the granddaughter of one of Hitler's right-hand men"
- He overheard Batzel tell someone that she was related to Heinrich
Himmler (or else this was part of conversation #1)
- He was told by Batzel the paintings were "inherited"
Smith developed the theory that the paintings were artwork stolen by the
Nazis and inherited by Batzel.
Smith had a dispute with Batzel [either about payments for work, or about
Batzel's refusal to use her Hollywood contacts to help Smith sell his
movie script]. It is not clear to what extent this dispute influenced
Smith's artwork theory.
Smith sent his allegations about Batzel in an email to Ton Cremers, who
ran a stolen-art mailing list. Smith found Cremers through a search
engine. This is still 1999.
Smith claimed in his email that some of Batzel's paintings were likely
stolen by the Nazis. (p 8432 of the decision, Absolute Page 5)
Smith sent the email to the Network's general address.
Cremers ran a moderated listserv specializing in this. He included
Smith's email in his next release. Cremers exercised editorial control
both by deciding inclusion and also by editing the text as necessary.
He included a note that the FBI had been notified.
The normal submission address for Cremers' list was a different, listserv-specific address.
Smith's emailed reply to someone when he found out he was on the list:
I [was] trying to figure out how in blazes I
could have posted me [sic] email to [the Network] bulletin board. I came
into MSN through the back door, directed by a search engine, and never got
the big picture. I don't remember reading anything about a message board
either so I am a bit confused over how it could happen. Every message
board to which I have ever subscribed required application, a password,
and/or registration, and the instructions explained this is necessary to
keep out the advertisers, cranks, and bumbling idiots like me.
Some months later, Batzel found out and contacted Cremers, who contacted
Smith, who continued to claim that what he said was true. However, he did
say that he had not intended his message for posting.
On hearing that, Cremers did apologize to Smith.
Batzel disputed having any familial relationship to any Nazis, and
stated the artwork was not inherited.
Batzel sued in California state court:
- Smith (who has no money)
- Cremers
- Netherlands Museum Association
- Mosler, Inc. They were included because they were a sponsor of the
Museum Security Network.
Cremers filed in Federal District Court for:
- summary judgement under anti-SLAPP rules (Strategic Lawsuit Against Public Participation)
- motion to dismiss for lack of jurisdiction
- §230 immunity
He lost on all three counts. (Should he have? We'll return to the
jurisdiction one later. Jurisdiction is a huge issue in libel law!). The
district court judge ruled that Cremers was not an ISP and so could not
claim §230 immunity.
Cremers then appealed the federal issues (anti-SLAPP, jurisdiction, §230)
to the Ninth Circuit, which simply ruled that §230 meant Batzel had no
case. (Well, there was one factual determination left for the District
Court, which then ruled on that point in Cremers' favor.)
This was the §230 case that set the (famous) precedent. This is a
major case in which both Congress and the courts purport to "get it" about
the internet. But note that there was a steady evolution:
- the law Congress intended
- the law Congress actually wrote down
- how the Ninth Circuit interpreted the law
IS Cremers like an ISP here? The
fact that he is editing the list he sends out sure gives him an active
role, and yet it was Prodigy's active-editing role that the CDA §230 was
arguably intended to protect.
Cremers is an individual, of course, while Prodigy was a huge
corporation. Did Congress mean to give special protections to corporations
but not individuals?
Cremers was interested in the content on his list, but he did not create
much if any of it.
Prodigy was interested in editing to create "family friendliness".
Cremers edited basically to tighten up the reports that came in.
Why does Communications Decency
Act have such a strong free-speech component? Generally free speech is
something the indecent
are in favor of.
The appellate case was heard by the Ninth Circuit (the Federal appellate
court covering CA and other western states); a copy is at BatzelvCremers.pdf.
(Page numbers in the sequel are given as printed/relative.)
[Opening (8431/4)] There is no reason inherent
in the technological features of cyberspace why First Amendment and
defamation law should apply differently in cyberspace than in the brick
and mortar world. Congress, however, has
chosen for policy reasons to immunize from liability for
defamatory or obscene speech "providers and users of interactive computer
services" when the defamatory or obscene material is "provided" by someone
Note the up-front recognition that this is due to Congress.
Section 230 was first offered as an amendment
by Representatives Christopher Cox (R-Cal.) and Ron Wyden (D-Ore.).
Congress made this legislative choice for two primary reasons. First,
Congress wanted to encourage the unfettered and unregulated development of
free speech on the Internet, and to promote the development of e-commerce.
(Top of 8445/18) The second
reason for enacting § 230(c) was to encourage interactive computer
services and users of such services to self-police the Internet for
obscenity and other offensive material
[extensive references to congressional record]
(8447/20): In particular, Congress adopted §
230(c) to overrule the decision of a New York state court in Stratton Oakmont.
Regarding question of why a pro-free-speech clause was included in an
anti-free-speech law (or, more precisely, addressing the suggestion that
§230 shouldn't be interpreted as broadly pro-free-speech simply because
the overall law was anti-free-speech):
(8445/18, end of 1st paragraph): Tension
within statutes is often not a defect but an indication that the
legislature was doing its job.
8448/21, start of section 2: To benefit
from § 230(c) immunity, Cremers must first demonstrate that his Network
website and listserv qualify as "provider[s] or user[s] of an interactive
computer service."
The District court limited this to ISPs [what are they?]. The Circuit
court argued that (a) Cremers was
a provider of a computer service, and (b) that didn't matter because he
was unquestionably a user.
But could user have been
intended to mean one of the army of Prodigy volunteers who kept lookout
for inappropriate content? It would do no good to indemnify Prodigy the
corporation if liability then simply fell on the volunteer administrators
of Prodigy's editing system. Why would §230 simply say "or user" when what
was meant was a specific user who was distributing content?
8450/23: Critically, however, § 230
limits immunity to information "provided by another information content provider."
Here's one question: was Smith
"another content provider"? You can link and host all you want, provided
others have created the material for
online use. But if Smith wasn't a content provider, then Cremers
becomes the originator.
The other question is whether Cremers was in fact partly the "provider",
by virtue of his editing. Note, though, that the
whole point of §230 is to allow (family-friendly) editing. So
clearly a little editing cannot be enough to void the immunity.
Here's the Ninth Circuit's answer to whether Cremers was the content
provider [emphasis added]:
8450/23, 3rd paragraph: Obviously,
Cremers did not create Smith's e-mail. Smith composed the e-mail
entirely on his own. Nor do Cremers's minor alterations of
Smith's e-mail prior to its posting or his choice to publish the e-mail
(while rejecting other e-mails for inclusion in the listserv) rise to the
level of "development."
More generally, the idea here is that there is simply no way to extend
immunity to Stratton-Oakmont-type editing, or to removing profanity, while
failing to extend immunity "all the way".
Is that actually true?
The Court considers some other partial interpretations of §230, but finds
they are unworkable.
8454/27, 3rd paragraph: Smith's confusion,
even if legitimate, does not matter, Cremers maintains, because the
§230(c)(1) immunity should be available simply
because Smith was the author of the e-mail, without more. We disagree. Under Cremers's broad
interpretation of §230(c), users and providers of interactive computer
services could with impunity intentionally post material they knew was
never meant to be put on the Internet. At the same time, the creator or
developer of the information presumably could not be held liable for
unforeseeable publication of his material to huge numbers of people with
whom he had no intention to communicate. The result would be nearly
limitless immunity for speech never meant to be broadcast over the
Internet. [emphasis added]
The case was sent back to district court to determine this point (which
it did, in Cremers' favor).
8457/30: We therefore ... remand to
the district court for further proceedings to develop the facts under this
newly announced standard and to evaluate what Cremers should have
reasonably concluded at the time he received Smith's e-mail. If Cremers
should have reasonably concluded, for example, that because
Smith's e-mail arrived via a different e-mail address it was not
provided to him for possible posting on the listserv, then
Cremers cannot take advantage of the §230(c) immunities.
Judge Gould's partial dissent in Batzel v Cremers:
The majority gives the phrase "information
provided by another" an incorrect and unworkable meaning that extends CDA
immunity far beyond what Congress intended.
(1) the defendant must be a provider or user
of an "interactive computer service"; (2) the asserted claims must treat
the defendant as a publisher or speaker of information; and (3) the
challenged communication must be "information provided by another
information content provider." The majority and I agree on the importance
of the CDA and on the proper interpretation of the first and second
elements. We disagree only over the third element.
Majority: part (3) is met if the defendant believes this was the
author's intention. Gould: This
is convoluted! Why should the author's intention matter?
Below, when we get to threatening speech, we will see that the issue
there is not the author's
intention so much as a reasonable recipient's perception.
The problems caused by the majority's rule
would all vanish if we focused our inquiry not on the author's [Smith's]
intent, but on the defendant's [Cremers'] acts
[pld: emphasis added here and in sequel]
So far so good. But then Gould shifts direction radically:
We should hold that the CDA immunizes a
defendant only when the defendant took no
active role in selecting the questionable information for publication.
How does this help Prodigy with family-friendly editing or
Stratton-Oakmont non-editing? Why not interpret (3) so the defendant is
immunized if the author did
intend publication on the internet? Though this interpretation wouldn't have
much impact on later §230 cases; it is almost always the case that the
author did intend internet publication.
Can you interpret §230 so as to (a) restrict protection to cases when
there was no active role in selection, and (b) solve the Stratton Oakmont problem?
Gould: A person's decision to select
particular information for distribution on the Internet changes that
information in a subtle but important way: it adds the person's imprimatur to it.
No doubt about that part. But Congress said that chat rooms, discussion
boards, and listservs do have special needs.
And why then add the "and users" language to the bill? These aren't users.
Gould: If Cremers made a mistake, we should
not hold that he may escape all accountability just because he made that
mistake on the Internet.
Did Congress decide to differ here?
The (potential) corporate liability for sexual harassment is perhaps the
most frequently cited justification for lack of employee privacy regarding workplace email.
Should this liability be there, in light of §230? Does §230 mean that a
company cannot be found liable as publisher or speaker for email created by its employees?
Arguably, the main issue here is a "hostile work environment", which is a
none-of-the-above in terms of publisher, author, or distributor liability.
This is an important point regarding the extent of §230 immunity. Companies
are not being found liable as publisher or author, but rather for
"tolerating" the authorship. (Still, the harassment generally has to be
Since this case, there have been MANY others decided by application of
this decision. See eff.org's section on Free Speech, http://www.eff.org/issues/free-speech.
There have also been many attacks on §230 immunity. The 2015 SAVE act is
one legislative approach. Other limitations may come, someday.
Publisher liability (except when eliminated by §230) exists even without
knowledge of defamatory material's inclusion.
Distributor liability is not exempted by §230. It is liability for knowingly distributing defamatory
material. However, in Zeran v AOL (below), the courts found that prior
notice doesn't automatically make for distributor liability.
Currently, the most likely approaches to attacking §230 immunity seem to
be to claim distributor liability, or to claim that the hosting site
actively contributed to the defamation or actively encouraged defamatory postings.
Is there another interpretation of §230 that is more conservative?
1. Limiting protection to genuine ISP-like services (perhaps run by
individuals). But the law has the phrase "or user"; is that consistent?
2. Limiting protection where the provider does not actively select material,
but only removes material posted by others. This might have been what some
in Congress had in mind, but is it workable?
[A Lot] More §230 Cases
There have been attacks on the §230 defense, but courts have been unwilling
to date to allow exceptions, or to restrict coverage to "traditional ISPs"
where there is zero role in selection of the other material being distributed.
There is still some question though about what happens if you do
actively select the material. Cremers played a very limited editorial
role. What if you go looking for criticism of someone and simply quote all
that? And what if you're a respected blogger and the original sources were
just Usenet bigmouths?
EFF: One court has limited §230 immunity to situations in which the
originator "furnished it to the provider or user under circumstances in
which a reasonable person...would conclude that the information was
provided for publication on the Internet...."
Be wary, too, of editing that changes the meaning. Simply deleting some
statements that you thought were irrelevant but which the plaintiff
thought were mitigating could get you in trouble!
Zeran v AOL
This was a §230 case that extended §230 immunity to cover at least some
distributor liability. The ruling was by the Fourth Circuit.
Someone posted a fake ad for T-shirts with tasteless slogans related to
the Oklahoma City bombing, listing Kenneth Zeran's home number. Zeran had
nothing to do with the post (although it is not clear whether the actual
poster used Zeran's phone intentionally). For a while Zeran was getting
hostile, threatening phone calls at the rate of 30 per hour.
Zeran lost his initial lawsuit against AOL.
Zeran appealed to the 4th circuit, arguing that §230 leaves intact
"distributor" liability for interactive computer service providers who
possess notice of defamatory
material posted through their services.
Publisher liability: liability even without knowledge of the defamatory material.
Distributor liability: liability for knowingly distributing defamatory material.
Zeran argued that AOL had distributor
liability once he notified them of the defamatory material.
Zeran lost. In part because he "fails to understand the practical
implications of notice liability in the
interactive-computer-service context"; note that the court here once again
tried to understand the reality of the internet. The court also apparently
felt that AOL was still acting more as publisher than distributor, at
least as far as §230 was concerned.
What if I quote other defamatory speakers on my blog in order to "prove
my point"? Batzel v Cremers doesn't entirely settle this; it's pretty much
agreed Cremers did not intend to
defame Batzel. The Barrett v Rosenthal case (next) did to an extent
address this situation, though.
There's also the distributor-liability issue left only partly settled in Zeran.
Barrett v. Rosenthal, Nov. 20, 2006:
California supreme court affirms core §230 ruling
The case was brought by doctors Stephen Barrett and Tim Polevoy against
Ilena Rosenthal, who posted statements on an alternative-medicine
newsgroup about the doctors. Barrett and Polevoy operated a website aimed
at exposing fraud in alternative medicine. The statements posted by
Rosenthal originated with Tim Bolen, an alternative-medicine activist,
and included accusations that Dr Polevoy engaged in "stalking" in order
to prevent the broadcast of a pro-alternative-medicine TV show.
Dr Barrett sued Rosenthal, arguing that Rosenthal bore distributor
liability for re-circulating Bolen's remarks. Barrett had warned Rosenthal
about the statement, thus meeting the "notice" requirement for distributor liability.
In the case before the California Supreme
Court, the doctor [Barrett] claimed that by warning Rosenthal that Bolen's
article was defamatory, she "knew or had reason to know" that there was
defamatory content in the publication. Under traditional distributor
liability law, therefore, Rosenthal should therefore be responsible for
the substance of Bolen's statements, the doctor claimed. The court
rejected the doctor's interpretation, saying that
the statute rejects the
traditional distinction between publishers and distributors, and
shields any provider or user who republishes information online. The court
acknowledged that such "broad immunity for defamatory republications on
the Internet has some troubling consequences," but it concluded that
plaintiffs who allege "they were defamed in an Internet posting may only
seek recovery from the original source of the statement."
Barrett could still sue Bolen. But Bolen might not have had any money,
and Barrett would have to prove that Bolen's original email, as
distributed by Bolen, was defamatory. If Bolen sent it privately,
or with limited circulation, that might be difficult.
See also wikipedia article http://en.wikipedia.org/wiki/Barrett_v._Rosenthal
Rosenthal was arguably even more of an Ordinary User than Ton Cremers.
Rosenthal may very well, however, have chosen Bolen's statement
specifically because it portrayed Dr Barrett negatively. Cremers had no such motive.
Jane Doe v MySpace: §230 applies to
liability re physical harm
Jane Doe sued, acting on behalf of Julie Doe, her minor daughter. Julie was 13
when she created a MySpace page, and 14 when she went on a date with someone
age 19 who then assaulted her. On the face of it, Doe claims that the suit
is about MySpace failing to protect children, or for failing to do
SOMETHING. But the court held that it's really about lack of liability for
Julie Doe's posting. Note that this isn't libel
law at all. The court argued that:
It is quite obvious that the underlying basis
of Plaintiff's claims is that, through postings on MySpace, *** and Julie
Doe met and exchanged personal information which eventually led to ... the sexual assault of Julie Doe.
Therefore the case is in fact about publication, and therefore MySpace
is immune under §230.
Similar case (Doe v Bates): Yahoo was sued
because someone posted child pornography on a yahoo group. (Note that Yahoo
here is a traditional ISP). ("Doe"
represented the anonymized parents of an alleged child victim.)
Here's a §230 case from http://www.entrepreneur.com/tradejournals/article/189703316_3.html
[dead link? Here is a link to a larger article: http://www.thefreelibrary.com/Combating+sexual+predators+online+and+conflicts+with+free+speech%3A+an...-a0189703316]
dealing with websites that allowed anonymous postings:
In Donato [v Moldow], two
members of the Emerson Borough Council [New Jersey] sued a Web site
operator and numerous individuals after they used pseudonyms when posting
on the Web site for "defamation, harassment, and intentional infliction of
emotional distress." (74) The appellants argued that Stephen Moldow, the
website operator, was liable for the damages because he was the publisher
of the website. (75) Much to their chagrin, the trial judge found that
Moldow was immune from liability under the Communications Decency Act,
(76) and the appellate court agreed. (77) The court reasoned that:
The allegation that the anonymous format
encourages defamatory and otherwise objectionable messages 'because users
may state their innermost thoughts and vicious statements free from civil
recourse by their victims' does not pierce the immunity for two reasons:
(1) the allegation is an unfounded conclusory statement, not a statement
of fact; and (2) the allegation misstates the law; the anonymous posters
are not immune from liability, and procedures are available, upon a proper
showing, to ascertain their identities. (78)
Note that Moldow was merely the operator here; he was not doing anything to select content.
In 2013, Illinois State Senator Ira Silverstein proposed the Internet
Posting Removal Act, which
provides that a web site administrator
shall, upon request, remove any posted comments posted by an anonymous
poster unless the anonymous poster agrees to attach his or her name to
the post and confirms that his or her IP address, legal name, and home
address are accurate.
The request here can be filed by anyone.
While parts of this strategy make sense, and while sometimes anonymous
postings can get quite ugly, the Supreme Court has long upheld the idea
that we have a right to anonymous speech. But not defamatory
speech. Here is an EFF link: https://www.eff.org/node/58343
about the case.
And on the subject of cranky anonymous postings, Nathan Matias claims at https://blog.coralproject.net/the-real-name-fallacy/
that anonymity is not the problem, and that posting anonymously
is one of the best ways for women and people with "ethnic" names to be
taken seriously. Matias also points out that the majority of online
harassment is not anonymous, and there are specific settings in which
people behaved quite well anonymously.
In December 2011 the Freeport Journal Standard published an article about
William Hadley, then a candidate (ultimately successful) for the
Stephenson County Board. An anonymous comment was left on the online
version of the story by user "Fuboy" suggesting that
Hadley was a child molester: "Hadley is a Sandusky waiting to be exposed."
Hadley sued the paper and its parent corporation for defamation. That
suit was settled within a few months, with the paper turning over to
Hadley the IP address from which Fuboy's comment was posted.
Hadley then asked Comcast for the corresponding subscriber information.
Comcast refused without a court order. Hadley then filed a subpoena for
the account information, at which point Comcast informed Fuboy. Fuboy
hired an attorney to represent Fuboy anonymously and attempt to quash the subpoena.
The lower court and Illinois appellate court both upheld Hadley's
subpoena, overruling Fuboy's argument that the speech was not defamatory
because (a) Hadley was a public official, and actual malice was not
evident, (b) everyone posts like this on newspaper sites, so defamation
cannot be inferred, and (c) the post could have referred to anyone.
On June 18, 2015, the Illinois Supreme Court also upheld the subpoena,
handing down an order requiring that Comcast reveal Fuboy's identity.
Note that Fuboy went to considerable expense to block the release of his
identity. He turned out to be Frank Cook, a part-time Stephenson County
The actual libel case was supposed to go to trial in December 2015, but
I've heard nothing. My guess is that the case was settled.
Hadley claims to have spent about $35,000 on finding Fuboy's identity.
Here's a 2009 discussion of whether it is time to rein in §230: http://arstechnica.com/tech-policy/news/2009/03/a-friendly-exchange-about-the-future-of-online-liability.ars.
The participants are Adam Thierer of the Progress & Freedom Foundation
and John Palfrey of Harvard Law School. Palfrey believes §230 needs to be
modified in cases like Jane Doe v MySpace, where Doe's daughter was
assaulted due to material published on MySpace (specifically, due to email
exchanges between Doe's daughter and the perpetrator). Palfrey believes that
such cases should be heard by the
courts, but that the steps MySpace took to protect minors would be taken
into consideration. Note that Palfrey apparently believes in the fairness
and appropriateness of the legal system; many ISPs, on the other hand, don't
agree and would do just about whatever it took to make sure cases never came to trial.
At the bottom of the last page, Palfrey suggests some alternatives for §230.
Here's an example of §230 being used to defend event-ticket resellers; the
claim is that the sites in question are essentially just auction sites, and
that the actual reseller was the person who offered their ticket online.
Note that §230 grants immunity without
requiring any balancing obligations. There is no "takedown"
requirement for "internet providers and users" to remove defamatory content
on request, as there is for example in OCILLA (the DMCA version). There is
not even a requirement that the internet provider/user cooperate with an
investigation of the alleged defamation.
Here's a 2017 discussion: economist.com/news/business/21716661-platforms-have-benefited-greatly-special-legal-and-regulatory-treatment-internet-firms.
Summary: the party cannot last forever.
Chicago Lawyers v Craigslist, 2006-2008
Craigslist has also been sued for posting housing ads that contained
discriminatory language (from "no minorities" to "clean godly Christian
male" to "no children"); most (all?) of these cases were also set aside on
§230 grounds. One case was brought in 2006 by the Chicago Lawyers' Committee
for Civil Rights; the Seventh Circuit (in Chicago) ruled that §230 protected Craigslist.
Sort of. The Seventh Circuit's decision included analysis that might be
unnecessary if a strict §230 protection was applied. The Court noted that
there were 30 million Craigslist posts a month, and that "fewer than 30
people ... operate the system". They did note that "Neither side's argument
finds much support in the statutory text" of §230; the Seventh Circuit is
clearly unhappy with the interpretation of §230 as a form of immunity.
Instead, they suggest "[w]hy not read §230(c)(1) as a definitional clause
rather than as an immunity from liability", that is, it declares that an ISP
is not an author and not a publisher. They also hinted that full §230
protection might apply only to those who did some content monitoring (which
completely reverses the idea that Compuserve was automatically not liable
but Prodigy might be).
Still, at the end of the day the Seventh Circuit went along with the usual
interpretation of §230:
What §230(c)(1) says is that an online
information system must not "be treated as the publisher or speaker of any
information provided by" someone else. Yet only in a capacity as publisher
could craigslist be liable under [the Fair Housing Act].
It is possible that the Seventh Circuit would be less inclined to reject
online distributor liability than the Fourth or Ninth Circuits.
In 2008, the Ninth Circuit ruled en banc in the case Fair Housing
Council v Roommates.com Inc that roommates.com was not entitled to §230
immunity for housing discrimination on their site because they actively
encouraged it. The online roommates.com questionnaire included several
illegal questions (eg about race), and you had to answer them in order to
complete your housing application.
Dart v Craigslist, 2009
Cook County Sheriff Tom Dart sued Craigslist for hosting advertisements for
prostitution. Dart also claimed that Craigslist took on a more active role
than simply publishing by virtue of maintaining categories like "adult
services" and "w4m" (women for men), and by providing a search function that
might enable users to search for ads that used codewords for prostitution
(eg "roses" or "diamonds" as stand-ins for "dollars").
What do you think of the idea that providing a search function causes a site
to lose §230 immunity? How would this affect Google?
Craigslist ads are generally free. However, they began charging in 2008 for
adult-services ads, at least in part because payment would make it difficult
or impossible for posters to remain anonymous (bitcoin notwithstanding).
Overall, it seems Craigslist was not happy with these ads, but did not
implement an "individual review" policy until 2011.
The CDA does include the following:
Nothing in this section shall be construed to
prevent any State from enforcing any State law that is consistent with
this section. No cause of action may be brought and no liability may be
imposed under any State or local law that is inconsistent with this section.
The first sentence here suggests that the CDA is not intended to interfere
with state laws against prostitution. The second, however, suggests that
§230 protections are indeed intended to apply to online speech cases that
may run afoul of state laws.
Prostitution is a violation of state law, of course, and Dart's complaint
stated that Craigslist itself was
"solicit[ing] for a prostitute", under the broader meaning of "soliciting".
Craigslist, Dart claimed, was also "knowingly assisting" others in finding
prostitutes, also against state law. The federal district court did not buy
this argument. From the decision of Judge John Grady:
"Facilitating" and "assisting" encompass a
broader range of conduct, so broad in fact that they include the services
provided by intermediaries like phone companies, ISPs, and computer
manufacturers. Intermediaries are not culpable for "aiding and abetting"
their customers who misuse their services to commit unlawful acts. [p 14]
The court did however point to the fact that Craigslist specifically and
repeatedly warned users not to post prostitution ads or other illegal ads.
Should they have to include such warnings?
The court also made reference to Does v GTE, in which a previous court ruled
that it was inconsistent with the statute's
apparent purpose to encourage monitoring ("Protection for 'good Samaritan'
blocking and screening of offensive material") to read §230(c)(1) to
immunize internet-service providers who
do nothing to monitor the content they make available to the public
[emphasis added by pld; from Dart v Craigslist p 12]
What do you think of that potential §230 limitation: that to receive §230
protection you must do at least some content monitoring? If you don't, you
can maybe fall back on the Cubby v Compuserve defense that you have only distributor liability. Should the
"distributor" classification apply to a site like Craigslist if they did no
Jones v The Dirty
The site thedirty.com publishes trashy
information about people. The site is run by Hooman Karamian, known on the
site as Nik Richie. The site actively encourages the submission of
scandalous information, and Nik frequently comments on submitters' postings.
In 2009 the site included a post about Sarah Jones, then a cheerleader for
the Cincinnati Bengals and a high-school teacher:
Nik, this is Sara J, Cincinnati
Bengal[sic] Cheerleader. She's been spotted around town lately with the
infamous Shayne Graham. She also has slept with every other Bengal
Football player. This girl is a teacher too!
A couple months later another post appeared.
Jones sued for defamation of character. The district-court judge, William
Bertelsman, denied a motion to dismiss the case on §230 grounds; in July
2013 a jury found in Jones' favor. Jury instruction #3 read
Defendants, when they re-publish the matters
in evidence, had the same duties and liabilities for re-publishing
libelous material as the author of such materials.
§230 be damned, in other words.
Just to prove once again that life is stranger than fiction, Jones pled
guilty in October 2012 to a felony charge of sex with a student (in fact
this may have delayed her libel case). Jones later became engaged to the student.
Bertelsman's argument against §230 immunity in his original decision was
based on the following (recall the Roommates.com case mentioned above, in
which the website was an active participant in the racial profiling of tenants):
- the site was engaged in "intentional development" of defamatory material
- Richie added additional snarky comments
In 2009 the Tenth Circuit ruled, in Federal Trade Commission v
Accusearch, that a site could not claim §230 immunity if it was
"responsible for the development of the specific content that was the source
of the alleged liability", and that the only way to escape this
responsibility was to be completely "neutral with respect to the
offensiveness of the content". In that case, Accusearch was in the business
of selling private phone records. §230 immunity seemed a real stretch, but
the Tenth Circuit's interpretation is very broad (much broader, for example,
than the reasoning in roommates.com). See Vision Security v Xcentric.
Mr Richie was far from neutral. The whole purpose of thedirty.com is to
attract salacious gossip. Still, his editorial remarks did not generally
claim that the stories posted were definitely true.
In June 2014 the Sixth Circuit issued its ruling, vacating Jones' victory
and upholding §230:
Under the CDA, Richie and Dirty World were
neither the creators nor the developers of the challenged defamatory
content that was published on the website. Jones's tort claims are
grounded on the statements of another content provider yet seek to impose
liability on Dirty World and Richie as if they were the publishers or
speakers of those statements. Section 230(c)(1) therefore bars Jones's claims.
Nik's comments, in other words, weren't enough for him to incur liability.
Cloudflare and Grooveshark
In April 2015 the music-file-sharing site Grooveshark was shut down due to
record-company litigation; part of the settlement transferred ownership of
the Grooveshark trademark to the record labels. A "clone" site, using the
name grooveshark.io, popped up a few weeks later; the record labels again
sued. This time they had only to show that grooveshark.io was violating
trademark law in its use of the "grooveshark" name.
On May 13, 2015, District Court Judge Deborah Batts issued, under seal, an
order that shut down the clone site.
Three weeks later, Judge Alison Nathan ruled that Judge Batts' order also
required Cloudflare, the large
content-delivery network, to police all the sites it provided its services
for, to detect any misuse of the "Grooveshark" trademark and to take action
to block such misuse. Commentators at the time drew an analogy between this
order and SOPA/PIPA, which would have made domain-name seizures even easier
and may also have required third parties to participate in enforcement. The
order also made Cloudflare responsible for its customers' actions, which was
an apparent conflict with §230 of the CDA.
Cloudflare objected strenuously, on §230 (and other) grounds, and asked that
they be responsible only for policing infringing sites that were brought to
their attention by the record-label plaintiffs. Judge Nathan eventually
agreed, and modified her previous order. In principle, §230 means that
Cloudflare doesn't have to cooperate at all, but they did not ask for that.
More at https://www.eff.org/deeplinks/2015/07/victory-cloudflare-against-sopa-court-order-internet-service-doesnt-have-police.
Doe v Model Mayhem
(The official case title is Jane Doe No. 14 v. Internet Brands, Inc.,
DBA Modelmayhem.com.) The plaintiff signed up for modeling
contacts at Model Mayhem, and was
sexually assaulted. She claims Model Mayhem should have some responsibility
for the fact that the "agency" that contacted her through the site was a fake.
On the one hand, Model Mayhem argued it is completely covered by §230. On
the other hand, they appear to have made no effort to warn prospective
models that not all "modeling agencies" are legitimate, and may have in fact
known that the perpetrator in Doe's case, Lavont Flanders, had assaulted
other MM models.
In 2013 the District court ruled that §230 protects Model Mayhem. In 2014
the Ninth Circuit ruled that §230 did not apply. In 2015 they withdrew that
opinion. In May 2016 the Ninth Circuit issued a new opinion, in which they
assert that MM's "failure to warn" was not tied to their
protection from user-posted content. They draw a distinction from Doe v
Myspace, in which the assault was tied to Doe's postings. The actual case
was remanded back to the District court for trial, partly on the issue of
whether MM actually did have a duty to warn.
If this ruling stands, it would mean Uber might not be able to claim §230
immunity for assaults by its drivers. Uber's business model is,
superficially, to allow riders and drivers to contact one another (by
posting on a site, in effect) to negotiate rides; taken literally, §230
might apply. Uber is, however, much more in charge of riders and, in
particular, drivers than it sometimes admits.
The Ninth Circuit seemed unwilling to apply §230 broadly here. Do you think
MM was guilty of inadequate screening? Or was there more going on? If MM is
ultimately found liable, how will that affect other websites that allow users to contact one another?
This "failure to warn" doctrine is the sort of thing that leads to multiple
pages of rather useless warnings on many consumer products.
Hassell v Bird
Dawn Hassell is a San Francisco attorney who briefly represented Ava Bird.
Hassell withdrew from Bird's case, because Hassell felt Bird wasn't
cooperating. Someone later posted a very negative review of Hassell on Yelp.
Hassell sued Bird for defamation (apparently without proof that Bird was the
author), and won by default as Bird simply did not show up.
But then things get interesting, from a §230 perspective. The state judge
who was hearing the case issued an order requiring Yelp to remove the
review, despite the general past interpretation of §230 that third
parties have no responsibility to remove defamatory content.
In August 2016 a California appeals court agreed, despite vigorous
opposition from Yelp. The court ruled that Yelp did not in fact even have
standing to challenge the original order.
The apparent argument is that if a court finds a statement to be defamatory,
then that statement does not have First Amendment protection. (Never mind
that the First Amendment and §230 are rather different things. Never mind
that there was never any finding that the statement was
actually defamatory, as the defendant did not show up.)
The case is currently (2017) on appeal to the California Supreme Court.
Since then, there have been multiple instances of the following scenario:
- Bob makes a defamatory online comment about Alice, on Charlie.com
- Alice sues someone named "Bob"
- The case is settled, with the "Bob" named in the lawsuit admitting the
comment was defamatory
- Alice obtains a court order requiring Charlie.com to remove the comment
The catch? The sued party named "Bob" has no relationship to the real
Bob. I am not making this up. In several instances of this kind of "fake
lawsuit", reputation-management companies have been implicated.
Suppose Bob posted his opinion of a restaurant, or an honest criticism of
service he received somewhere. What speech protections is Bob entitled to?
Conversely, of course, suppose Alice wins a legitimate
defamation lawsuit against Bob. Under §230 today, Charlie.com would have
no obligation to remove the content. Is that fair to Alice?
In many cases, the target served with the takedown notice is not
Charlie.com, but Google.com. Should Alice be able to have Bob's comments
removed from Google's search results?
Former Sony chairman Michael Lynton used a possibly
legitimate process to have an unflattering Gawker article by Sam Biddle
removed from the Gawker archives (though not from Google search itself).
The article, based on emails released in the 2014 Sony hack, alleged that
Lynton donated a significant sum of money to Brown University in the hopes
of improving his daughter's chances of admission.
See the links below for further information about what actually happened.
The Gawker link is here: gawker.com/how-the-rich-get-into-ivies-behind-the-scenes-of-elite-1699066450.
But all it says is "Page Not Found".
If one tries to look up the article in the Wayback Machine, one gets the
message "Page cannot be displayed due to robots.txt".
Another article about the event, from the prestigious Chronicle of Higher
Education, is here: chronicle.com/blogs/ticker/brown-gave-special-admissions-treatment-to-donors-daughter-hacked-emails-show/97599.
This article includes a link to the now-deleted Gawker article. A Buzzfeed
article is here: buzzfeed.com/stevenperlberg/a-sony-hack-story-has-been-quietly-deleted-from-gawker.
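As an aside on the robots.txt mechanism mentioned above: the Wayback Machine
has historically declined to display pages from sites whose robots.txt
excludes crawling. Below is a minimal sketch (not from the original notes) of
how a crawler checks such an exclusion, using only the Python standard
library. The Gawker URL is the one cited above; "ia_archiver" is assumed here
to be the Internet Archive crawler's user-agent name, and the sketch assumes
the site's robots.txt is still reachable.

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (assumes the host still responds).
    rp = robotparser.RobotFileParser()
    rp.set_url("http://gawker.com/robots.txt")
    rp.read()

    article = ("http://gawker.com/how-the-rich-get-into-ivies-"
               "behind-the-scenes-of-elite-1699066450")

    # A polite crawler asks can_fetch() before retrieving a page.
    # "ia_archiver" is assumed to be the Internet Archive's crawler name.
    for agent in ("*", "ia_archiver"):
        print(agent, rp.can_fetch(agent, article))

If robots.txt disallows the page for a given agent, can_fetch() returns
False; that refusal is what the Wayback Machine surfaces as "Page cannot be
displayed due to robots.txt".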
Uber and Airbnb
Yes, Uber lives on §230 protection too. Uber's position is that they are an
online brokerage, at which drivers can bid to pick up passengers (though
it's not real bidding, in that Uber sets the rates). Uber argues that
drivers are independent contractors, and that §230 means Uber cannot be held
liable for any action of a driver against a passenger or vice-versa.
This is why Uber fights so hard in those lawsuits claiming their drivers are
actually employees.
The online-brokerage argument fits Airbnb even better. Real estate owners,
in theory, use Airbnb to post ads for their properties, much like they might
on Craigslist. What responsibility should Airbnb have? Uber, at least, sets
fares and assigns a driver to a passenger.
Cohen et al v Facebook
The plaintiffs were survivors of attacks by Hamas; they sued Facebook on
the legal theory that Facebook "supported terrorist organizations by
allowing those groups and their members to use its social media platform
to further their aims." In particular:
Facebook allows [Hamas], its members, and
affiliated organizations to operate Facebook accounts in their own names,
despite knowledge that many of them have been officially named as
terrorists and sanctioned by various governments.
On May 18, 2017 the judge dismissed the case on §230 grounds: Facebook is
not responsible for the content posted by terrorists.
Had the plaintiffs won, it is hard to see how any messaging or email
service would emerge unscathed: a side effect of allowing communications
is that sometimes people communicate evil things.
Note that we are quite a long way from defamation here. Note also that,
as there were 20,000 plaintiffs, Facebook's potential liability might have
been enormous.
More at blog.ericgoldman.org/archives/2017/05/facebook-defeats-lawsuit-over-material-support-for-terrorists-cohen-v-facebook.htm.
Section 230 immunity has led to the rise of so-called "revenge sites", sites
that specialize in the posting of "revenge" information. One such site is ripoffreport.com, run by Xcentric.
In litigation in 2011, Judge Cortiñas of the Florida appellate court stated
The business practices of Xcentric, as
presented by the evidence before this Court, are appalling. Xcentric
appears to pride itself on having created a forum for defamation. No
checks are in place to ensure that only reliable information is
publicized. Xcentric retains no general counsel to determine whether its
users are availing themselves of its services for the purpose of tortious
or illegal conduct. Even when, as here, a user regrets what she has posted
and takes every effort to retract it, Xcentric refuses to allow it.
Moreover, Xcentric insists in its brief that its policy is never to remove
a post. It will not entertain any scenario in which, despite the clear
damage that a defamatory or illegal post would continue to cause so long
as it remains on the website, Xcentric would remove an offending post.
And yet the court upheld §230 protection for the site.
(Xcentric has lost other cases. One was Xcentric v Smith, in which
the allegedly defamatory content was created by a poster named Meade who may
have had a financial relationship with Xcentric. This is a preliminary
decision. Another case was Vision Security v Xcentric, in which the
district-court judge wrote
[A] service provider is not neutral if it
“specifically encourages development of what is offensive about the
content.” ... Xcentric argues that drawing all inferences in favor of
Vision Security, it must be found to have been a neutral publisher. The
facts as alleged, however, support a contrary conclusion.
The neutral-publisher rule comes from the Tenth Circuit Federal Trade
Commission v Accusearch case, above. It also turned out that, despite
ripoffreport's claim that they "never remove a post", they will do so for
a significant fee.)
There are also issues with individuals bent on revenge. Canadian
teacher Lee David Clayworth had a romantic relationship with Lee Ching Yan.
After it ended, Yan relentlessly posted negative information about
Clayworth, including nude pictures and allegations that he had slept with
underage students. As a result, Clayworth has found it difficult to find a
new job as a teacher.
However, if you google
his name since ~2014, almost all the links Google supplies are to articles
about his misfortune. (There was one link to liarscheatersrus.com).
So in some sense publicizing his case has had the indirect effect of
clearing his name.
Another example is Sue
Scheff, who used Dozier Internet Law to sue Carey Bock for defamation
and won an $11 million judgement. Ms Bock was unable to attend the trial, as
at the time she was homeless in the aftermath of Hurricane Katrina. But
Scheff's real success has been in using the Internet to rehabilitate her
own reputation.
Recently there has been a rise in so-called "revenge porn" sites, at which
one member of an ex-couple can post intimate pictures of the other. Should
§230 protect such sites?
Barnes v Yahoo
Barnes was a revenge-porn victim; the posts appeared on Yahoo under her name
although they were in fact posted by Barnes' ex-boyfriend. A Yahoo "director
of communications" agreed to arrange for the posts to be removed, in
accordance with Yahoo's terms-of-service rule that disallowed posting under
someone else's name, but they were not. Barnes eventually sued. In 2009, the
Ninth Circuit ruled that §230 protected Yahoo, although they did eventually
remove the posts. The Ninth Circuit did rule that Yahoo could be
sued for breach of promise, as they'd promised to take down the posts and
then did not (it is not clear why).
Here are two theories from a California Law Review article by Zak Franklin (http://www.californialawreview.org/wp-content/uploads/2014/12/05-Franklin.pdf).
This Comment rejects that conclusion and
articulates two theories that might enable a plaintiff to persuade courts
that many website operators are responsible for the harmful content on
their sites, and therefore are liable as information content providers
under Section 230. First, where an operator has added original material, a
victim-plaintiff can argue that the revenge porn website operator
contributed to the illegality of the post. Second, a plaintiff can argue
that the operator is responsible for the content because the operator ...
The first argument is reasonable, if the site operator did in fact
contribute. The second is much harder, in light of the Jones v The Dirty
decision.
One thing to bear in mind, however, is that the original poster of revenge
porn is almost always readily identifiable. Criminalizing such posting can
thus go a long way towards preventing it, even if the sites cannot be shut
down.
California in 2013 criminalized the posting of nude images of someone
without their consent, with the intent to cause distress. Where does that
leave the following?
- Pictures of people at, say, the San Francisco Folsom Street Fair
- Long-lens paparazzi shots of celebrities
- Documentation of sex crimes or inappropriate activity
- Publishing selfies by Anthony Weiner
In the other direction, the California law only applies in "circumstances
where the parties agree or understand that the image shall remain private".
This opens up a host of possible arguments as to interpretation, especially
if the image was recorded outdoors.
Oddly, the California law apparently excludes photos obtained by hacking.
In Illinois it is now a felony to post "sexually explicit videos or photos
of another person, without their consent". The law also makes it illegal to
"host a website that requires victims to pay a fee to have the explicit
photos removed". The former would appear to cover all the categories above.
As for the latter, if the server or site operator were outside of Illinois,
would Illinois have jurisdiction? If so, wouldn't the site simply refuse to
remove the picture under any circumstances?
Note that if the image was a "selfie", the person pictured generally owns
the copyright, and can request takedown under the DMCA. Alternatively, Alice
and Bob can agree (preferably in writing) that Bob owns the copyright to any
pictures taken by Alice of Bob in a state of déshabillé.
Generally, such laws require that the poster had "malicious intent".
Paparazzi pictures generally do not fall into that category, but celebrities
may latch on to any available tool to discourage paparazzi.
On June 19, 2015, Google introduced a new policy of taking down search
links to revenge porn. The announcement is at http://googlepublicpolicy.blogspot.com/2015/06/revenge-porn-and-search.html.
Google is not, of course, able to remove the "original" posts, but making
images unsearchable is a big step.
Google notes that
This is a narrow and limited policy, similar
to how we treat removal requests for other highly sensitive personal
information, such as bank account numbers and signatures, that may surface
in our search results.
One might categorize these as limiting search for data that has essentially
no public-policy utility. It remains to be seen how Google treats
paparazzi pictures of déshabillé celebrities, or pictures such as
Weiner's infamous selfie. (As a side note, what category
might describe Weiner's picture? Harassment, maybe, but that isn't the full
story.)
See also http://www.slate.com/blogs/future_tense/2015/06/19/google_announces_plan_to_remove_revenge_porn_from_search_results.html
Note that Google's policy here is another step towards acknowledging at
least some limited "right to be forgotten".
In May 2016, Kevin Bollaert was convicted and sentenced to 18 years in prison
(later reduced to 8 years) for running the now-defunct UGotPosted.com. The
trial court largely ignored Bollaert's §230 defense, however; it is not
clear if an appeal on this point is underway. See also https://www.techdirt.com/articles/20160507/01180534366/revenge-porn-creep-kevin-bollaerts-appeal-underway-actually-raises-some-important-issues.shtml.
The SAVE Act
The name stands for Stop Advertising Victims of Exploitation; it is not
related to the Campus SaVE act. It was signed into law in 2015, and
represents the first legal step away from blanket §230 immunity. It creates
a criminal liability for websites that knowingly run prostitution
advertisements for victims of "sex trafficking", generally meaning people
forced into prostitution. This would make it much more difficult for sites
like Craigslist to run sex
ads, except those seem to have already ended. Its real goal may be the
closure of backpage.com, which
continues to run ads for "escorts". (In January 2017 Backpage finally did
cave in to pressure and removed all content from its "adult" sections.)
Prevention of sex trafficking is unarguably an important goal, but one
sometimes suspects that the "real" goal of some anti-trafficking laws is to
crack down on prostitution itself. It appears that the vast majority of sex
workers have chosen their occupation voluntarily; furthermore, online
advertising is a much safer way for prostitutes to meet clients than
streetwalking. It is hard to get accurate numbers on sex trafficking, but
some numbers seem clearly inflated. Some people (mostly at the right edge of
the political spectrum) regard all prostitution as inherently "forced"; this
seems a bit of an overstatement. In 2016 Amnesty International adopted a
policy calling for decriminalization of prostitution. They took this position to make
it easier to fight sex trafficking.
The San Francisco site myredbook.com was shut down in 2014. The article http://www.wired.com/2015/02/redbook/
has some anecdotes on this, but little data. Another article is http://www.rawstory.com/2014/07/fbi-seizure-of-my-red-book-website-spurs-san-francisco-bid-to-decriminalize-prostitution/,
suggesting there is increasing sentiment to decriminalize prostitution in
San Francisco.
§230 has been called "the law judges love to hate", and judges do indeed,
despite years of precedent, routinely rule against §230 protection. So stay
tuned to recent decisions. Eric Goldman blogs about most §230 cases at blog.ericgoldman.org.
See also the EFF list at www.eff.org/issues/cda230.
While §230 gives sites like YouTube and Facebook immunity for user-posted
content, most mainstream sites implement a considerable degree of
self-censorship, to avoid offending users. In the industry this is often
referred to as "moderation" of the site. Banned content typically includes
nudity, violence, hate speech, and often a variety of other offensive
topics. Usually the details of a site's "acceptable-content" policy,
however, are not spelled out.
A good article on this is http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech.
Do organizations and people who experience such online defamation and
"cyberbullying" deserve some means of redress? Often Section 230 is not
the issue; the idea would be to go back to the original poster.
Here is a list of a few of those who have committed suicide due to intense
online harassment. There are many more.
- Sept 9, 2013: Rebecca Sedwick; Florida; Ask.fm and others; two other
  students arrested on felony charges
- Dec 11, 2012
- Sept 29, 2012
- Oct 17, 2006: Megan Meier; Lori Drew created fake account
- Oct 7, 2003
Rebecca Sedwick's mother moved her to a new school and closed her Facebook
account. But she did not know about ask.fm,
where Rebecca re-encountered her harassers.
To what extent should harassment online be illegal?
Should it be illegal to tell a middle-school student to "go drink bleach and
die"? What about to your Congressional representative?
Criminal Libel
There is such a thing! From http://law.jrank.org/pages/1563/Libel-Criminal.html:
At common law, libel was
recognized as a criminal misdemeanor as well as an individual injury
justifying damages (a tort). Prosecutions of the offense had three goals:
protection of government from seditious statements capable of weakening
popular support and causing insurrection; reinforcement of public morals
by requiring a "decent" mode of community discourse; and protection of the
individual from writings likely to hold him up to hatred, contempt, or
ridicule. The protection of the individual, a goal that is generally left
to tort law, was justified by the criminal law's responsibility for
outlawing statements likely to provoke breaches of peace.
It's hard to see how anything on the internet could result in an immediate
breach of the peace, as compared, say, to leafleting at a protest march,
or using a bullhorn to incite a crowd. Criminal libel prosecutions have
been extremely rare for the past ~70 years. When they do occur, they usually
represent either an overzealous police department or someone rich and
powerful who doesn't want to bring a civil suit directly. Under
criminal-libel laws, the government foots the bill for what arguably
should be the plaintiff's position.
Criminal Libel is sometimes justified as
(and sometimes limited to) a way of protecting the reputations of the
dead; living people can sue.
Here is a 2003 example at the University
of Northern Colorado involving a new satirical newsletter published by Thomas
Mink.
To spice up the first issue, Mink
doctored a photograph of well-known UNC finance professor Junius
Peake so that he resembled Gene Simmons of KISS in full makeup.
Mink described his digital creation as "Junius Puke," editor in
chief of the publication.
The police charged Mink, but the
local prosecutor insisted that Mink "was in no danger of
prosecution"; ie, his office would never have followed up on
prosecuting the case. However, this was less clear to Mink, and the
original arrest and equipment seizure were apparently solely for criminal libel. Mink's case was not
dropped until he went before a federal judge in Colorado.
Colorado is apparently serious about
this. From 2008:
FORT COLLINS, Colo. — A man (J.P. Weichel) accused of
making unflattering online comments about his ex-lover and her
attorney on Craigslist has been charged with two counts of criminal libel.
[Police] obtained search warrants
for records from Web sites including Craigslist before
identifying Weichel as the suspect.
Note that a search warrant cannot be obtained in a civil suit!
The doctrine of criminal libel
is severely at odds with free speech. Nonetheless, it may be
on the rise, as states see it as the only way to rein in the
runaway Internet libel released by §230.
Another libel legal theory is that of group
libel: you can be
sued if you make defamatory remarks about a group of people
(eg a racial/ethnic/religious group), without singling out any
specific individual. The courts have over the years not been
terribly receptive to this theory.
Google Conviction in Italy
On February 24, 2010, three executives of Google were convicted in Italy of
violating criminal privacy laws; each received a six-month suspended
sentence. At issue was Youtube's delay in removing a video in 2006 of four
youths beating a boy with autism and/or Down syndrome.
Google complied with a request from Italian police for removal of the video,
but possibly was not so prompt in responding to earlier requests. Under
Italian privacy law, videos cannot be posted
online without the consent of all participants (Illinois has a
similar law regarding audio recordings, though the Illinois law simply
forbids the act of recording itself).
The Italian prosecutor's argument was that, through advertising revenue,
Google profited from the video, and thus was criminally responsible.
In the US, §230 of the CDA makes Google immune to civil liability in such
cases; free-speech rights ensure Google would be immune to criminal
prosecution as well.
The European Union issued Directive 2000/31/EC, dated June 8, 2000 (http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0031:EN:HTML),
which was intended in part to limit ISP liability. See paragraph 40, for example.
However, the directive is rambling and quite lengthy, and the fact that
Google profited from the YouTube video through advertising seems to have
been interpreted by the Italian prosecutor as voiding ISP status. Note that
prologue paragraph 40 states that one goal is the "development of rapid and
reliable procedures for removing and disabling access to illegal
information". The last phrase is quite striking in and of itself:
what makes information intrinsically illegal? Prologue paragraph 42 states
The exemptions from liability established
in this Directive cover only cases where the activity of the information
society service provider is limited to the technical process of operating
and giving access to a communication network over which information made
available by third parties is transmitted or temporarily stored, for the
sole purpose of making the transmission more efficient; this activity is
of a mere technical, automatic and passive nature, which implies that the
information society service provider has neither knowledge of nor control
over the information which is transmitted or stored.
The relevant part of the actual directive is as follows:
1. Where an information society service is provided that consists of
the transmission in a communication network of information provided by a
recipient of the service, or the provision of access to a communication
network, Member States shall ensure that the service provider is not
liable for the information transmitted, on condition that the provider:
(a) does not initiate the transmission;
(b) does not select the receiver of the transmission; and
(c) does not select or modify the information contained in the transmission.
The New York Times has suggested that one issue was Italian prime minister
Silvio Berlusconi's control of television and traditional media, which
compete with the Internet. The case at hand, if upheld, could make YouTube
unavailable in Italy.
In December 2012 an Italian appeals court overturned the three convictions.
The state has not appealed further.
Finally, note that, by all accounts, YouTube has been very successful in
filtering out nudity, even mild forms. They could probably figure out how to
filter other things, if they really wanted to.
How does this Google/YouTube problem differ from the problem faced by
craigslist for unfair-housing and prostitution ads? Note that the only
reason craigslist ran into trouble, rather than, say, eBay, was that the
latter does not provide useful local listings (and is not free to listers).
See Baase 4e p 148 / 5e p 154
1996: AOL v Cyber Promotions
Note that CP initially sued AOL for blocking CP's spam! Eventually AOL
prevailed.
Intel-Hamidi case: Ken Hamidi sent email to 30,000 Intel employees.
Intel sued. It eventually reached the California Supreme Court, who ruled
in Hamidi's favor. Hamidi's emails were about how Intel employees could
better understand their employment rights.
Harris Interactive sued the Mail Abuse Prevention System, for blocking
their opinion-poll email. One interesting claim by Harris is that they
were "turned in" to MAPS by a competitor. Harris dropped the suit.
People have a right to send email. Sort of.
So far we have been looking at §230 and libel safeguards for sites that
"quoted" someone else. What about sites such as those below where original
negative information is posted? It might seem the only defense is if the
negative information is truthful, and the targets are all large
organizations with large legal budgets. All they have to do is find the site
has a single piece of false negative information....
And yet no libel lawsuits have been filed. Why?
Libel and Internet complaints about corporations
A selected few "sucks" sites. Search for (large
company name) + "sucks" to find more. These sites come and go quickly.
How can "sucks" sites get away with criticizing major corporations? Because
sometimes suing complainers just does not have the desired effect. More
often than not, it draws more public attention to the alleged corporate
misbehavior; see the Streisand effect.
The McLibel case
In the late 1980's, Dave Morris and Helen Steel participated with others
in an organization known as London Greenpeace (unaffiliated with
Greenpeace International) in handing out leaflets (remember those?) at
local McDonalds stores. The leaflets made claims such as the following:
- McDonalds' land use led to displacement & hunger for third-world populations
- locals starve while food crops are exported for use as animal feed
- rainforest destruction
- destruction of tribal lifestyles in rainforest
- McDonalds food is high in fat
- encourages overeating
- encourages children to think McD's is "normal"
- hamburgers are made of dead animals
- unions are not allowed
Note that their story had NOTHING to do with the internet! Though,
today, the group most likely would have a website.
McDonalds had done a great deal of investigating; they had hired spies to
infiltrate London Greenpeace to get names of members involved. This wasn't
entirely coordinated; two spies spied on each other for an extended
period. Another spy had a long romantic relationship with a real member.
In 1990, McDonalds sued everyone in the group for libel. Everyone folded
except for Morris and Steel.
The case went on for two and a half years, the longest civil case in
English history. Morris & Steel raised £35,000 for their defense, most
of which apparently went to paying for transcripts.
Technically, Morris and Steel lost. Recall that in England the defense
in a libel trial has to prove their claims are true; in the US the
plaintiff must prove the claims false. From http://mcspotlight.org/case/trial/story.html:
Mr Justice Bell took two hours to read his
summary to a packed court room. He ruled that Helen and Dave had not
proved the allegations against McDonald's on rainforest destruction, heart
disease and cancer, food poisoning, starvation in the Third World and bad
working conditions. But they had proved that McDonald's "exploit children"
with their advertising, falsely advertise their food as nutritious, risk
the health of their most regular, long-term customers, are "culpably
responsible" for cruelty to animals, are "strongly antipathetic" to unions
and pay their workers low wages.
And so, Morris & Steel were held liable for £60,000 in damages.
As a practical matter, though, McDonalds was exposed throughout the case
to over five years of increasingly bad -- make that dreadful --
press. Most ordinary people were offended that a huge corporation would
try so hard to squelch criticism; the business community was none too
supportive of McDonalds either. McDonalds' bungling spies didn't help any.
On appeal, the court agreed with Morris and Steel that McDonalds' food
might be considered "unhealthy", but Morris and Steel had also claimed it
was "carcinogenic". The judgement was reduced to £40,000
On 15th February 2005, the European Court of Human Rights in Strasbourg
declared that the mammoth McLibel case was in breach of the right to a
fair trial (because Morris and Steel were not provided with an attorney)
and right to freedom of expression.
The bottom line for McDonalds, and for corporations generally, is that
taking a critic to court for libel can be a very risky strategy. (There
are still many individual libel lawsuits, but these are a different matter.)
The phrase Libel Terrorism is a
play on "libel tourism", the practice of suing for libel in the UK (or
another friendly venue, though it's hard to beat the UK's "defendant must
prove truth" doctrine, plus the "plaintiff need not prove malice"
New York now has the Libel Terrorism Protection Act.
Case: Sheikh Khalid bin Mahfouz v Rachel Ehrenfeld
Rachel Ehrenfeld wrote Funding Evil,
a rather polemical book about how terrorist organizations gain funding
through drug trafficking and other illegal strategies. The first edition
appeared in 2003. The book apparently alleges that Sheik Khalid bin
Mahfouz is a major participant in terrorist fundraising. Mahfouz
sued in England, although the book was not distributed there; however, 23
copies were ordered online from the US. In 2005 the court in England found
in Mahfouz's favor, describing Ehrenfeld's defense as "material of a
flimsy and unreliable nature" (though some of that may have been related
to the costs of mounting a more credible defense, and Ehrenfeld's
conviction that no such defense should be necessary), and ordered
Ehrenfeld to pay $225,000.
Ehrenfeld filed a lawsuit against Mahfouz in the US, seeking a
declaration that the judgement in England could not be enforced here. The
case was dismissed because the judge determined that the court lacked
jurisdiction over Mahfouz. A second ruling arriving at the same conclusion
came in 2007.
In May 2008, New York state passed the Libel Terrorism Protection Act, which
offers some form of protection against enforcement in New York state of
libel claims from other countries. However, Mahfouz has not sought to
collect, and probably will not.
gatt.org, and cyberhoaxes
(compare wto.org and wipo.int)
This is vaguely related to McLibel-type sites, in that this is an attack
on the "real" WTO (which was formerly the Generalized Agreement on Trade
& Tariffs, or GATT). Is this funny? Or serious? Are there legitimate
Note that it keeps changing.
Try to find the links that are actually there.
gatt.org links and Dow's Acceptable Risk seem pretty permanent.
There are plenty of genuine libel lawsuits on the Internet. But
Section 230 and the McLibel Effect seem to eliminate a good chunk of them.
With libel, the Ninth Circuit in the Batzel-v-Cremers case interpreted
§230 as saying you have immunity for posting material that originated with
someone else, if your
understanding was that the other party intended the material for posting.
With "threat speech", the courts have long held that speech qualifies as
a threat if a reasonable listener
(or reader) feels that a threat is intended. Your intentions may not
count at all.
Planned Parenthood v American Coalition of Life Activists
In the case Planned Parenthood v American Coalition of Life Activists
(ACLA, not to be confused with ACLU, the American Civil Liberties Union),
Planned Parenthood sued ACLA for a combination of "wanted" posters and a
website that could be seen as threatening abortion providers. In early
1993 a "wanted" poster for Dr David Gunn, Florida, was released; on March
10, 1993 Dr Gunn was murdered. Also in 1993, a wanted poster for Dr George
Patterson was released and on Aug 21, 1993 Dr Patterson was subsequently
murdered, although there was never a claim that he was murdered for
providing abortion services, and there was some evidence that his murder
was part of a random altercation. Two days before, Dr George Tiller was
shot and wounded in Kansas; Tiller was murdered in a later attack May 31,
2009. In 1994 a poster for Dr John Britton, Florida, was released; Dr
Britton was later murdered, along with James Barrett, on July 29, 1994. Dr
Hugh Short was shot November 10, 1995; he was not killed, but he could no
longer perform surgery.
There was never any evidence that the ACLA itself participated in any of
the assaults, or had any direct contact with those who did; there were
plenty of individual antiabortion extremists who were apparently willing
to carry these out on their own.
I've never been able to track down any of these individual posters (which
is odd in and of itself), but here's a group one:
When US Rep Gabrielle Giffords (D, AZ) was shot in January 2011, some
people pointed to the poster below from Sarah Palin's site, and from her
twitter line, "Don't Retreat,
Instead - RELOAD!" A June 2010 post from Giffords election opponent
Jesse Kelly said, "Get on
Target for Victory in November. Help remove Gabrielle Giffords from
office. Shoot a fully automatic M16 with Jesse Kelly."
But there are multiple differences. Perhaps the most important is that no
new crosshair/target/wanted-style posters have been released by anyone
since the Tucson shootings. Under what circumstances might people view
this kind of poster as a threat? Should candidates and political-action
committees be required to address perceived threats?
Neal Horsley was an anti-abortion activist pretty much
unaffiliated with ACLA. He maintained a website he called the "Nuremberg
Files" site, with the nominal idea of listing names of abortion providers
for the day when they might be tried for "crimes against humanity" (in
genuine crimes-against-humanity cases, the defense "it was legal at the
time" is not accepted).
On Oct 23, 1998, Dr Barnett Slepian was killed at home. The day before,
according to Horsley, his only intent was to maintain a list of
providers; the day after, he added Dr Slepian's name with a strikethrough.
Strikethroughs were also added to the names of Drs Gunn, Patterson and
Britton. Dr Slepian's name had not been on Horsley's list at all
before Slepian's murder, leading Horsley to protest vehemently that his
site could not have been a threat against Slepian. The lawsuit, however,
was filed by other physicians who felt it was a threat to them;
Horsley is silent on this.
At the conclusion of the trial, the judge ordered (among other things)
that Horsley not use the strikethrough.
Why would a judge issue rules on what typestyle (eg strikethrough)
a website could use? Did the judge in fact issue that ruling, or is that
just an exaggeration from the defendants? The actual injunction (from the
District Court link, below) states
In addition, defendants are enjoined from
publishing, republishing, reproducing and/or distributing in print or
electronic form the personally identifying information about plaintiffs
contained in Trial Exhibits 7 and 9 (the Nuremberg Files) with
a specific intent to threaten. [emphasis added by pld]
That is much more general than just "no strikethrough", though the
strikethrough was widely interpreted as a "specific intent to threaten".
But intent is notoriously hard to judge, and in fact (as we shall see) the
case ended up hinging more on the idea that Horsley's site would be interpreted as a threat by a neutral observer.
If you create a website, who should interpret your intent?
Horsley's original site was at christiangallery.com (no longer active).
Here is an archive of Horsley's site with
the original strikethrough: aborts2.html
(Dr Gunn is column 2 row 8).
After the Ninth Circuit's ruling, Horsley replaced his list of names
with a pro-abortion site's list of providers who had been
injured or murdered; an abbreviated archive is at aborts.html.
Lower down on his original version of this page, Horsley used
strikethrough for names of women who had died as a result of receiving an
abortion.
Horsley also created a separate page discussing the Ninth Circuit's
ruling and a related California provider-privacy law, in an effort to
explain his position. He reproduced part of his original list with
strikethroughs. A portion of this page is archived at californicate.htm.
After looking at these, consider Horsley's claim,
All we've done, and all really anybody's
accused us of doing, is printing factually verifiable information... If
the First Amendment does not allow a publisher to publish factually
verifiable information, then I don't understand what the First Amendment's ...
Do you think this is an accurate statement?
The civil case was filed in 1995, after some abortion providers had been
murdered or attacked (eg Dr Hugh Short, who was shot but survived) and "wanted" posters were issued by ACLA for
others. There was a federal law, the 1994 Federal Freedom of Access to
Clinic Entrances Act (FACE), that provided protections against threats
to abortion providers.
Horsley's site was created in 1997, and was added to the case;
apparently Horsley himself was not added. By 1997, the internet was no
longer new, but judges were still having difficulty figuring out what
standards should apply. In retrospect it seems reasonable to think that,
if it were not for the context created by the "Wanted" posters, there
would have been no issue with the Nuremberg Files web pages.
Horsley's actual statements are pretty much limited to facts and to
opinions that are arguably protected. He does not
appear to make any explicit calls to violence.
Planned Parenthood, on the other hand, claimed the site "celebrate[s]
violence against abortion providers".
For a while, Horsley was having trouble finding ISPs willing to host his
site. The notion of ISP censorship is an interesting one in its own right.
The Stanford site, below, claims that OneNet, as the ISP (carrying traffic
only) for the webhosting site used by Horsley, demanded that Horsley's
content be removed.
Here's a Stanford
group's site about the case.
The central question in the case is whether the statements amounted to a
"true threat" that met the standard for being beyond the bounds of
The District Court judge (1999) gave the jury
instructions to take into account the prevailing
climate of violence against abortion providers; the jury was
also considering not an ordinary civil claim but one brought under the
Freedom of Access to Clinic Entrances act (FACE), which allows
lawsuits against anyone who "intimidates" anyone providing an abortion.
(The first-amendment issue applies just as much with the FACE law as
without.) The jury returned a verdict against the ACLA for $100 million,
and the judge granted a permanent injunction against the Nuremberg
Files site (Horsley's).
The District Court judge wrote (full order at PPvACLA_trial.html):
I totally reject the
defendants' attempts to justify their actions as an expression of opinion
or as a legitimate and lawful exercise of free speech in order to dissuade
the plaintiffs from engaging in providing abortion services.
The law requires a higher level of scrutiny and proof for an injunction
involving speech than for an award of damages for violation of a
statute... I find the actions of the defendants in preparing, publishing
and disseminating these true threats objectively and subjectively were
not protected speech under the First Amendment.
Under current free-speech standards (for criminal law at
least), you ARE allowed to threaten people. You ARE allowed to incite
others to violence.
However, you are NOT allowed to incite anyone to imminent
violence, and you are NOT allowed to make threats that you personally
intend to carry out. Did the ACLA do either of these things? Does it
matter that this was a civil, not criminal, case?
Ninth Circuit Three-Judge Panel
The case was appealed to a 9th Circuit 3-judge panel, which overturned
the injunction. Judge Kozinski wrote the decision, based on NAACP
v Claiborne Hardware, SCOTUS 1982.
NAACP v Claiborne Hardware summary:
This was, like PP v ACLA, a civil case. The NAACP had organized a boycott
in 1968 of several white-owned businesses, and had posted activists to
take down names of black patrons; these names were then published and read
at NAACP meetings. The NAACP liaison, Charles Evers [brother of Medgar
Evers] had stated publicly that those ignoring the boycott would be
"disciplined" and at one point said "If we catch any of you going in any
of them racist stores, we're gonna break your damn neck."
A local merchant association sued the NAACP for lost business. The
Supreme Court found in the NAACP's favor, on the grounds that the boycott
itself was protected speech under the First Amendment. Also, there was no
evidence Evers had authorized any acts of violence, or even made any
direct threats (eg to specific individuals); Evers' "speeches did not
incite violence or specifically authorize the use of violence".
Judge Kozinski argued that whatever the ACLA was doing was less
threatening than what Evers was doing, and therefore dismissed the case.
However, another feature of the Claiborne case was that, while
there were several incidents of
minor violence directed at those who were named as violators of the
boycott, in fact nobody was seriously
harmed. And, of course, the ACLA's "wanted" posters were indeed
directed against specific individuals.
Furthermore, the merchants who brought the Claiborne case had experienced
essentially no violence whatsoever; the Supreme Court found that the nonviolent elements of the boycott
were protected speech. The plaintiff merchants had no standing to address
the allegations of violence. Seen that way, the Claiborne case
offers no precedent relevant to the ACLA case. Claiborne is not
really about the right to make vague threats.
Full Ninth Circuit
The full Ninth Circuit then heard the case, en banc.
The ruling was by Judge Rymer, with dissents by judges Reinhardt,
Kozinski (writer of the decision of the three-judge panel that heard the
case), and Berzon (of Batzel v Cremers)
5 pages of plaintiffs / defendants
Here is Rymer's problem with the NAACP
v Claiborne analogy: 7121/41, at 
Even if the Gunn poster, which was the first
"WANTED" poster, was a purely political message when originally issued,
and even if the Britton poster were too, by the time of the Crist poster,
the poster format itself had acquired currency as a death threat for
abortion providers. Gunn was killed after his poster was released; Britton
was killed after his poster was released; and Patterson was killed after
his poster was released.
Neal Horsley claims no one was listed on the Nuremberg Files list until
after they were attacked.
But more importantly, does the temporal sequence above of "first a
poster, then a crime" constitute a "true threat"?
Here's Rymer's summary: 7092/12, 3rd paragraph
We reheard the case en banc because
these issues are obviously important. We now conclude that it was proper
for the district court to adopt our long-standing law on "true threats" to
define a "threat" for purposes of FACE. FACE
itself requires that the threat of force be made with the intent to
intimidate. Thus, the jury must have found that ACLA made
statements to intimidate the physicians, reasonably foreseeing that
physicians would interpret the statements as a serious expression of
ACLA's intent to harm them because they provided reproductive health
services.
7093/13 We are independently satisfied that
to this limited extent, ACLA's conduct amounted to a true threat and is
not protected speech
Threats are not the same as libel: 7099/19
Section II: (p 7098/18) discussion of why the court will review the
facts (appeals courts sometimes don't) as to whether ACLA's conduct was a
true threat.
Section III (p 7105) ACLA claims its actions were "political speech" and
not an incitement to imminent lawless action. The posters have no explicitly
threatening language.
7106/26, end of 1st paragraph:
ACLA submits that classic political speech cannot be converted into
non-protected speech by a context of violence that includes the
independent action of others.
This is a core problem: can context
be taken into account? Can possible
actions of others be taken into account?
The text of the FACE law:
Whoever... by force or threat
of force or by physical obstruction, intentionally injures, intimidates or interferes with or
attempts to injure, intimidate or interfere with any person because that
person is or has been [a provider of reproductive health services]
[n]othing in this section shall be construed . . . to prohibit any
expressive conduct ... protected from legal prohibition by the First
Amendment.
This subjects them to civil remedies, though perhaps not prior restraint.
The decision cited the following Supreme Court cases:
Brandenburg v Ohio, criminal
case, SCOTUS 1969: The First Amendment protects speech advocating
violence, so long as the speech is not intended to produce "imminent
lawless action" (a key phrase introduced) and is not likely to produce
This was an important case that strengthened and clarified the "clear
and present danger" rule (speech can only be restricted in such
situations) first spelled out in Schenck v US, 1919. Brandenburg
introduced the "imminent lawless action" standard.
Clarence Brandenburg was a KKK leader who invited the press to his
rally, at which he made a speech referring to the possibility of
"revengeance" [sic] against certain groups. No specific attacks OR TARGETS
Robert Watts v United States,
criminal case, SCOTUS 1969. Watts spoke at an anti-draft rally at the
Washington Monument:
"They always holler at us to get an
education. And now I have already received my draft classification as 1-A
and I have got to report for my physical this Monday coming. I am not
going. If they ever make me carry a rifle the first man I want to get in
my sights is L.B.J."
Watts' speech was held to be political hyperbole. This case overturned
long precedent regarding threats.
Particular attention was given to NAACP
v Claiborne, considered above. The crucial distinction: there was
no actual violence then! The Supreme Court's decision was in effect that
Evers' speeches did not incite illegal activity, and thus he could not be
found liable for any business losses. No "true threat" determination was
made nor needed to be made.
Also, Evers' overall tone was
to call for non-violent actions
such as social ostracism.
Here is another important case cited as a precedent, also decided by the
Ninth Circuit, in which a threat was ruled legitimate:
Albert Roy v United States,
criminal case, Ninth Circuit, 1969:
USMC private Roy heard that then-President Nixon was coming to the base,
and said to a telephone operator "I hear the President is coming to the
base. I am going to get him". Roy's conviction was upheld,
despite his insistence that his statement had been a joke, and that he had
promptly retracted it. This case was part of a move to a "reasonable
person" test, eventually spelled out explicitly by the Ninth Circuit in
its case United States v
Whether a particular statement may properly
be considered to be a threat is governed by an objective standard --
whether a reasonable person would foresee that the statement would be
interpreted by those to whom the maker communicates the statement as a
serious expression of intent to harm or assault.
Note this "reasonable person" standard. On the one hand, this means no
hiding behind "that's not really what we meant"; on the other hand, what
if violence is not what you really meant? In Roy's case, part of
the issue was that the threat was reasonable enough to frighten the
telephone operator, and thus to affect the security preparations for the
President's visit.
(All three of these last cases were criminal cases. In
the 2015 Elonis case, below, the Supreme Court ruled that the "reasonable
person" standard was not normally sufficient for criminal
threat prosecution; this standard was for civil cases only. The PP v ACLA
case was, of course, a civil case.)
In the PP v ACLA decision, the Ninth Circuit wrote:
It is not
necessary that the defendant intend to, or be able to carry out his
threat; the only intent requirement for a true threat is that the
defendant intentionally or knowingly communicate the threat.
The defendant must communicate it as a serious threat, that is, not just
hyperbole or a joke.
In an amicus brief, the ACLU argued the person must have intended
to threaten or intimidate.
Rymer: this intent test is included in the language of FACE; ACLA
met this test long ago. Did ACLA
intend to "intimidate"? Or were the "wanted" posters more hyperbole?
Two dissents in the decision argue that the speaker must "actually
intend to carry out the threat, or be in control of those who will"
But Rymer argues that the court should stick with the "listener's
reaction"; ie the reasonable-person standard again.
Here's the conclusion of Rymer's line of argument on intent vs how the
statement is received:
7116/36, at  Therefore, we hold that
"threat of force" in FACE means what our settled threats law says a true
threat is: a statement which, in the
entire context and under all the circumstances, a reasonable
person would foresee would be interpreted by those to whom the
statement is communicated as a serious expression of intent to inflict
bodily harm upon that person. So defined, a threatening statement
that violates FACE is unprotected under the First Amendment.
Crucial issue: the use of the strikeout and grey-out. This is what
crosses the line.
7138/53, 2nd paragraph:
[The posters] are a true threat because, like Ryder trucks or burning crosses, they
connote something they do not literally say, yet both the actor and the
recipient get the message.
The Supreme Court refused to hear the case. The Ninth Circuit had
established that the speech in question met certain standards for being a
true threat, and the ACLA would have had to argue that some factual
interpretations were mistaken. But the Supreme Court does not generally
decide cases about facts; they accept cases about significant or
conflicting legal principles.
See also Baase, 4e p 173, Exercise 3.22 below(omitted from 5e); note that
the "controversial appeals court decision" refers to the three-judge
panel, reversed by the en banc decision.
3.22 An anti-abortion Web site posts
lists of doctors who perform abortions and judges and politicians who
support abortion rights. It includes addresses and other personal
information about some of the people. When doctors on the list were
injured or murdered, the site reported the results. A suit to shut the
site for inciting violence failed. A controversial appeals court decision
found it to be a legal exercise of freedom of speech. The essential issue
is the fine line between threats and protected speech, a difficult issue
that predates the Internet. Does the fact that this is a Web site rather
than a printed and mailed newsletter make a difference? What, if any,
issues in this case relate to the impact of the Internet?
Finally, you might wonder why, with all the threats of violence made during
the course of the civil rights movement by whites against blacks, the
case NAACP v Claiborne that comes to us is an allegation of violence by
blacks against blacks, filed by whites. I think it's safe to say that
the answer has nothing to do with who made more threats, and everything to
do with who could afford more lawyers.
Traditionally, it has been the "conservative" position that all threats of
violence are to be taken seriously, and that maintenance of good social
order often trumps individual rights. Similarly, it is traditionally the
"liberal" position that in some cases about threats there is a legitimate
Free Speech issue, and that our rights may often trump maintenance of the
status quo. Note, however, that some have identified the Ninth Circuit's
ruling here as another "liberal" opinion.
In June 2015, the Supreme Court ruled in Elonis
v US that the conviction of Anthony Elonis for threats posted on
Facebook against his ex-wife, his children and his co-workers was improper,
due to flawed jury instructions.
Each of the Elonis threat posts did include a disclaimer referring to the
First Amendment.
Elonis himself argued that he should only have been convicted if he intended
to carry out the threats.
The Supreme Court ruled that, for criminal threat convictions, it
was not enough that a "reasonable person" would understand the
threats as genuine", which is what the jury was instructed.
Elonis's conviction was premised solely on
how his posts would be viewed by a reasonable person, a standard feature
of civil liability in tort law inconsistent with the conventional criminal
conduct requirement of "awareness of some wrongdoing,"....
PP v ACLA, however, was a civil case, and the "reasonable person"
standard remains applicable there.
The Elonis ruling might
have changed the outcome of Roy v
US (also a criminal case), but that case was 46 years earlier.
The Supreme Court added that they did not find Elonis
to be a First-Amendment case:
Given the disposition here, it is
unnecessary to consider any First Amendment issues.
The case also emphasized that criminal convictions should always require
some element of criminal intent, or mens rea.
Hit Man
This was a book published by Paladin
Press, written by "Rex Feral". There is a story circulating that the
author is a woman who writes true-crime books for a living, but this seems
speculative. It is likely not written by an actual hit man.
In 1993, James Perry murdered Mildred Horn, her 8-year-old son Trevor,
and nurse Janice Saunders. He was allegedly hired by Lawrence Horn. In
Rice v Paladin Enterprises (1997), the federal court of appeals (4th
circuit) held that the case could
go to jury trial; ie freedom-of-press issues did not automatically prevent
the suit.
Many of the specifics of the Perry murders came straight out of the book. Many of
them are rather compellingly "obvious": pay cash, rent a car under an
assumed name, steal an out-of-state license plate, use an AR-7 rifle
(accurate but collapsible), make it look like a robbery
The book also explains how to build a silencer, which is not at all
obvious; Perry allegedly did just this.
The following are from the judge's decision. "Stipulations" are alleged
facts that are not being contested at the present time.
"The parties agree that the sole issue to be
decided by the Court . . . is whether the First Amendment is a complete
defense, as a matter of law, to the civil action set forth in the
plaintiffs' Complaint. All other issues of law and fact are specifically
reserved for subsequent proceedings." (emphasis added)
Paladin has stipulated not only that, in
marketing Hit Man, Paladin "intended to attract and assist criminals and
would-be criminals who desire information and instructions on how to
commit crimes," J.A. at 59, but also that it "intended and had knowledge"
that Hit Man actually "would be used, upon receipt, by criminals and
would-be criminals to plan and execute the crime of murder for hire." J.A.
at 59 (emphasis added). Indeed, the publisher has even stipulated that,
through publishing and selling Hit Man, it assisted Perry in particular in
the perpetration of the very murders for which the victims' families now
attempt to hold Paladin civilly liable. J.A. at 61. [note 2] 
Notwithstanding Paladin's extraordinary
stipulations that it not only knew that its instructions might be used by
murderers, but that it actually intended to provide assistance to
murderers and would-be murderers which would be used by them "upon
receipt," and that it in fact assisted Perry in particular in the
commission of the murders of Mildred and Trevor Horn and Janice Saunders,
the district court granted Paladin's motion for summary judgment and
dismissed plaintiffs' claims that Paladin aided and abetted Perry, holding
that these claims were barred by the First Amendment as a matter of law.
What's going on here? Why did Paladin stipulate all that? It looks to me
like Paladin was acknowledging the hypotheticals as part of its claim that
they didn't matter, that the First Amendment protected them.
The court ruled it did not:
long-established caselaw provides that
speech--even speech by the press--that constitutes criminal aiding and
abetting does not enjoy the
protection of the First Amendment
Past cases that lost:
- publishing advice on how to make illegal drugs
- publishing advice on how to cheat on your taxes
Brandenburg v Ohio [discussed above under PP v ACLA] was cited as a case
of protected speech advocating lawlessness. But this case, due to
Paladin's stipulations [!!], was much more specific.
A popular theory was that after Paladin Press settled the case (which
they did, under pressure from their insurer), the rights to the book ended
up in the public domain. Paladin claims otherwise, and this theory makes
no sense under copyright law. However, the Utopian Anarchist Party
promptly posted the entire book at overthrow.com, and Paladin, no longer
able to profit from the book, was completely uninterested in takedown
efforts. (The bootleg copies don't have the diagrams, though.)
It has been claimed that Hit Man
was sold almost entirely to non-criminals who simply like
antiestablishment stuff. However, this is (a) speculative (though likely),
and (b) irrelevant to the question of whether some
criminals bought it.
Look at the current Paladin
website. Does it look like their primary focus is encouraging
criminals? Secondary focus?
To find Hitman, google "hit
man" "rex feral", or search Amazon.com. Most references as of 2009 were to
those selling used copies of the physical book; in 2016 Google lists more
online copies of the book and articles about its history with Paladin.
Check out Amazon.com for current prices of used editions. The site http://mirror.die.net/hitman
still has the online text.
Other bad materials:
- Encyclopedia of [Afghan] Jihad
- Bomb-making instructions generally
Note the Encyclopedia of Jihad has a significant political/religious
component.
4th-circuit opinion: http://www.bc.edu/bc_org/avp/cas/comm/free_speech/rice.html
Should the law generally make sense? See http://xkcd.com/651
As we've seen above, threats must be "true threats" to be unprotected
speech, but the standard for that is pretty much the eye of the recipient.
Harassment of another individual is generally not protected by free-speech
laws. Computer-mediated forms of such harassment can include emails, open
and closed discussion forums, texts, or even blogs.
Generally, harassment must be directed at an individual, and it must
inflict emotional harm.
Incitement to Imminent Violence
The Brandenburg standard is still good law here: inflammatory speech is permitted unless it
is intended to, and likely to, incite imminent lawless action. But specific
threats are separate.
Group libel remains a long shot. The idea is that if someone says hateful things
about a specific ethnic, racial, or religious group,
any member of that group can file a lawsuit.
Criminal libel is an even longer shot, except in Colorado.
ISPs and Hate Speech
ISPs are not obligated to do anything about hate speech on their customers'
web sites. They are not obligated to remove anything objectionable or
offensive.
However, many ISPs do have Terms of Service forbidding hate speech.
Universities and Hate Speech
Arthur Butz, a
retired faculty member at Northwestern University, has a sideline of writing
essays (and a book) denying the Holocaust. For a long time, his faculty web
page at Northwestern contained links to all his other writings. As of now,
it appears that his other writings have been moved to another site.
Northwestern has always had a policy allowing faculty to use the internet
for a wide variety of purposes. In their Rights and
Responsibilities policy, Rights
comes at the beginning and the first item under it is Intellectual
Freedom, where it is stated that
The University is a free and open forum for
the expression of ideas, including viewpoints that are strange,
unorthodox, or unpopular. The University network is the same.
Note that the immediately following item on the list is Safety
from Threats. That is, despite the above, Northwestern does not tolerate threats.
Other universities have disallowed student/faculty use of the internet
except for narrow academic purposes, perhaps with cases like Butz's in mind.
German regulation of hate speech
Germany's constitution states that
everybody has the right freely to express
and disseminate their opinions orally, in writing or visually and to
obtain information from generally accessible sources without hindrance.
However, German criminal law forbids
- defamation of the deceased
- incitement to violence and hatred
- public display of the Nazi swastika
- claiming as fact things that are demonstrably false
The last one has been used successfully to prosecute Holocaust deniers.
In other words, despite the wording of the German constitution, speech is
much more regulated than in the United States. That is, the German courts
have interpreted their free-speech clause less broadly than have the US courts.
German law has generally tolerated the existence of off-shore hate-speech
websites accessible in Germany. However, there have been attempts to
prosecute when (a) there were relatively stronger grounds for claiming
jurisdiction, and (b) there were things that might have been done to
restrict access within Germany.
In 1995, Nebraskan neo-Nazi Gary Lauck
was arrested on a trip to Denmark, extradited to Germany, and convicted for
neo-Nazi materials he published in the United States, some of which were
shipped to Germany. He served four years in prison. After his release he
switched to mostly online activities; he has apparently not been arrested since.
In 1999, the Australian Fredrick Töben
was arrested while on a trip to Germany, for Holocaust-denial activity; at
least some of this appears to have been carried out via a website Töben
maintained in Australia; he was later convicted and served seven months in
prison. In 2008 Germany attempted to extradite Töben from Australia. This failed. He
was later arrested at Heathrow Airport in England, while traveling; again,
the German extradition claim failed as Holocaust denial is not a crime in
Britain.
Töben was convicted in Australia in April 2009 for violating a court order
not to include anti-Semitic materials on his website, and served three
months.
Finally, in 1998, Felix Somm -- at
the time the German manager of CompuServe -- was convicted in Germany
because CompuServe made certain pornography available in Germany. Somm's
conviction was later overturned, apparently because Somm had absolutely no
control over the material in question and in fact had asked
CompuServe to block the material within Germany.
What if Somm, instead of asking CompuServe to block the material, had
instead thrown up his hands and said it was beyond his control?
Canada also criminalizes hate speech: it is a criminal act to "advocate or
promote genocide" or to willfully promote "hatred against any identifiable
Ultimately, the problem of jurisdiction
for speech regulation is a difficult one. We'll come to that jurisdiction
issue later, as a topic in and of itself. The US has arrested and tried
foreigners for actions that were legal where they took place, notably Dmitry
Sklyarov and David Carruthers.
International Convention on the Elimination of All Forms of Racial Discrimination (ICERD)
From the Anti-Defamation League site above:
... nations ratifying the [ICERD] convention
are required to "declare an offence punishable by law" the dissemination
of ideas "based on racial superiority or hatred." Additionally, the
convention requires these nations to "declare illegal and prohibit" all
organizations and organized activities that "promote and incite racial discrimination."
The United States signed the convention in 1966, but the Senate tacked the
following on to the ratification resolution:
The Constitution and laws of the United
States contain extensive protections of individual freedom of speech,
expression and association. Accordingly, the United States does not accept
any obligation under this Convention, in particular under articles 4 and
7, to restrict those rights, through the adoption of legislation or any
other measures, to the extent that they are protected by the Constitution
and laws of the United States
(Note that there is a long history of UN actions that various member states
have declined to accept.)
LICRA v Yahoo
See also Marc Greenberg's article at http://www.btlj.org/data/articles/18_04_05.pdf.
(Quotes below not otherwise cited are from Greenberg's article.)
Yahoo offered Nazi memorabilia for sale on its auction site. They were
sued by LICRA (originally the LIgue
Contre le Racisme et l'Antisémitisme; later the Ligue Internationale
Contre le Racisme et l'Antisémitisme), joined by the UEJF, the Union of
French Jewish Students. In France the sale of Nazi memorabilia is illegal.
This was a civil case; no criminal charges against Yahoo executives were
ever filed and no Yahoo execs were arrested while changing planes in Paris.
This is a JURISDICTIONAL case that probably should
be discussed elsewhere, except that it addresses a free-speech issue. But
this is as good a time as any to start in on some of the rationales for a
given court's claiming judicial jurisdiction related to an action that
occurred elsewhere. Here are some theories, more or less in increasing
order of "engagement":
In Batzel v Cremers, the California court decided it had jurisdiction
perhaps because of the plaintiff test (see the list below).
- the "affects" test: the court decides that the remote
action affects its own local citizens in some manner. A passive website
would count here.
- the "affects intentionally" test: the court decides
that the source intended to
have an effect on its local citizens
- the "targeting" test: the court feels that the action
was directed at its local
citizens, with some level of intent.
- the "primarily affects" test: the court decides that
the action's primary effect is
on its local citizens
- the plaintiff test: the affected party (buyer or the
one defamed, for example) lives in the local jurisdiction
- purposeful availment: by choosing to engage in local
commerce, the remote entity "purposefully avails" itself of the legal
system of the local jurisdiction.
- contract: the remote site has a contract with parties
in the local jurisdiction
The LICRA v Yahoo case was heard in Paris by Judge Jean-Jacques Gomez,
who explained the French law as follows:
Whereas the exhibition of Nazi objects for
purposes of sale constitutes a violation of French law ..., and even more
an affront to the collective memory of a country profoundly traumatised by
the atrocities committed by and in the name of the criminal Nazi regime
against its citizens and above all against its citizens of the Jewish
faith . . . .
Judge Gomez decided they did have jurisdiction to hear the case. But
Yahoo US had no assets in France! There was a separate company, Yahoo
France, that controlled the yahoo.fr domain.
Judge Gomez based his jurisdictional decision on the so-called effects
test: that the actions of Yahoo US had negative effects
within France. Intent, or targeting, or direction do not enter; the
effects test is perhaps the weakest basis for claiming jurisdiction. Gomez
later explained some of his reasoning in an interview:
For me, the issue was never whether this was
an American site, whether Yahoo had a subsidiary in France, the only
issue was whether the image was accessible in France. It is true
that the Internet creates virtual images, but to the extent that the
images are available in France, a French judge has jurisdiction for harm
caused in France or violations of French law.
But in the case of my decision, it was
extremely simple: the Nazi collectibles were visible in France, this is a
violation of French law, and therefore I had no choice but to decide on
the face of the issue. Whether the site is all in English or not makes no
difference. The issue of visibility in a
given country is the only relevant issue.
Gomez issued his first interim order on May 22, 2000: that Yahoo US must
use geolocation software to block access to its auction materials within
France. It was estimated that 70% of French citizens could be blocked by
the software alone, and that another 20% would be blocked by adding a page
saying "To continue, click here to certify that you are not in France."
What would the purpose of that be? Clearly, French neo-Nazis would likely
simply lie. However, other French citizens would be reminded that these
objects violated French law. What is the purpose of laws?
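For concreteness, here is a minimal sketch of the two-layer blocking scheme contemplated in the order: IP geolocation first, then a self-certification page for addresses that cannot be located. The lookup_country() function and the addresses are invented for illustration; real deployments query a commercial IP-to-location database.

    # Sketch of the two-layer blocking in Judge Gomez's order.
    # lookup_country() is a hypothetical stand-in for a real
    # IP-to-location database; the table below is toy data.

    TOY_GEO_TABLE = {
        "192.0.2.1": "FR",       # illustrative addresses only
        "198.51.100.7": "US",
    }

    def lookup_country(ip_address):
        """Return a country code for ip_address, or None if unknown."""
        return TOY_GEO_TABLE.get(ip_address)

    def serve_auction_page(ip_address, certified_not_in_france=False):
        country = lookup_country(ip_address)
        if country == "FR":
            return "403: not available in France"
        if country is None and not certified_not_in_france:
            # Fallback for unlocatable users: the certification page
            return 'To continue, click here to certify that you are not in France.'
        return "auction listing"

    if __name__ == "__main__":
        print(serve_auction_page("192.0.2.1"))          # blocked by geolocation
        print(serve_auction_page("203.0.113.9"))        # certification page
        print(serve_auction_page("203.0.113.9", True))  # served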
In November 2000, Gomez issued a second interim order fining Yahoo US
100,000 francs per day for noncompliance, after three months. (The May order
had listed 100,000 euros, roughly six and a half
times as much.) He included in his ruling evidence that not only had Yahoo
US done things that had effects in
France, but also that Yahoo US was targeting
France; the latter claim was based on the observation that, for most French
viewers visiting yahoo.com, the
advertisements displayed were in French.
When Yahoo indicated they might not comply, based on First Amendment
grounds, LICRA & UEJF suggested they might go after the assets of
yahoo.fr, though this was perhaps just overheated hyperbole.
At about the same time, Rabbi Abraham Cooper, of the Simon Wiesenthal
Center, issued his own argument against a First Amendment defense:
It's good to try to wrap yourself
around free speech . . . but in this case it doesn't wash. Television
stations, newspapers and magazines refuse to
accept some advertisements in an effort to marginalize viewpoints
and products that the vast majority of Americans think are disrespectful
or even potentially dangerous. Internet
companies . . . should just do what American companies have been doing
for half a century: reserve the right not to peddle bigotry.
The US side
At this point, Yahoo US did two things. The first was to decide, internally,
based on arguments by Rabbi Cooper and others, to ban the sale of all
"hate material" on its US site, including both Nazi and KKK memorabilia.
Excluded from the ban were books (eg Hitler's Mein Kampf) and
items issued by governments (eg German coins bearing the swastika).
Allegedly this decision was made "independently" of the decision of the
Paris court, though the review was pretty clearly prompted
by that decision. The continued sale of books and coins would not
bring Yahoo US into full compliance with Judge Gomez's order. Here's a recent
quote from http://help.yahoo.com/l/us/yahoo/shopping/merchant/pricegrabber-04.html;_ylt=AqCUIEnwDUz2L4y3cFY9q3fuqCN4
spelling out the rule:
Any item that promotes, glorifies, or is
directly associated with groups or individuals known principally for
hateful or violent positions or acts, such as Nazis or the Ku Klux Klan.
Official government-issue stamps and coins are not prohibited under this
policy. Expressive media, such as books and films, may be subject to more
permissive standards as determined by Yahoo! in its sole discretion.
The second action Yahoo US took was to sue in US court for a Declaratory
Judgement that the French
court did not have jurisdiction
within the US, and that no French order or claim could be enforced in the
US. This case was Yahoo v LICRA
(the reverse order of the French case LICRA v Yahoo). Such declaratory
judgement orders are common in contract and IP cases (especially patent
cases); if party A threatens party B with a contract or patent-infringement
claim, and B believes that the suit is meritless, they can bring an action
for declaratory judgement that forces A's hand (and which also may put the
case into a more B-friendly forum). In order to ask for a declaratory
judgement, there must be an actual controversy at hand; the question may not
be moot or speculative.
The case was heard by US District Court Judge Fogel, of California. There
were two legal issues to be addressed:
- whether the US court had jurisdiction at all over LICRA and UEJF
- whether there was sufficient actual
controversy that a declaratory judgement could be issued.
Note that in the first item here, the question of whether the French court
had jurisdiction over Yahoo US is turned
around. The second question hinges on whether the controversy is
"ripe" for settlement.
For a finding of jurisdiction, there is a three-part test:
- LICRA & UEJF must have "purposefully availed" themselves of the
right to conduct some US-based transaction, in some sense acknowledging
their protection under US laws (specifically,
The non-resident defendant must purposefully direct
his activities or consummate some transaction with the forum or
resident thereof; or perform some act by which he purposefully
avails himself of the privilege of conducting activities in
the forum, thereby invoking the benefits and protections of its laws;)
- The claim must arise out of LICRA & UEJF's activities within the forum
- The claim of jurisdiction must be "reasonable"
The second two parts are straightforward; the purposeful-availment test is
trickier. LICRA and UEJF had (1) sent a cease-and-desist letter to Yahoo,
(2) had requested that the French court put restrictions on Yahoo's actions
within the US, and (3) had used the US marshal's office to serve papers on
Yahoo US. Judge Fogel argued that the defendants here engaged in actions
that not only had effects on Yahoo
US, but which were also targeted
against Yahoo US; the act of targeting is strong evidence that the
purposeful-availment standard is met.
(Yahoo had tried to claim that, because LICRA used a yahoo.com email
address, they had thus agreed to Yahoo's terms
of service requiring US jurisdiction; apparently judge Fogel didn't
seriously consider that.)
The second part of the issue is the "ripeness" standard, that there is in
fact an actual controversy. LICRA and UEJF insisted that they were satisfied
with Yahoo US's compliance, and that they had no intention of asking for
enforcement of the 100,000-franc-per-day judgement. Yahoo, for its part,
insisted that (a) they were not in full compliance with the French court's
order, as they still allowed the sale of Nazi books and coinage, and (b)
that their free-speech rights were being chilled
by the threat of the judgement, even if further legal steps never
materialized. This is a core issue with free-speech cases: it is often the
case that party A treads on party B's free-speech rights simply by making a
threat; B might comply for the time
being, but might still want a definitive ruling.
Judge Fogel agreed, and issued his ruling in Yahoo's favor.
The Appellate decision
The 9th Circuit Appellate court, ruling en
banc, held that the US likely did have jurisdiction in the case
against LICRA and UEJF, specifically because of LICRA and UEJF's actions
against Yahoo US in French court. BUT the case was directed to be
"dismissed without prejudice", as it was not yet ready to be decided. It
was not in fact "ripe"; there was
no active controversy.
(same thing happened to US v Warshak, when the 6th circuit en
banc ruled the question was not "ripe")
The appellate decision was based squarely on the idea that Yahoo US insisted that its change of policy
regarding the sale of "hate" artifacts was not related to the French case.
As a result of that, Yahoo could not
show that their speech was in any way chilled. Therefore, there
was no actual controversy. The Appellate court also took into account the
lack of interest on the part of LICRA and UEJF in pursuing the penalties. Finally, paradoxically, the
Appellate court hinted that Yahoo could not really have believed that, if
LICRA or UEJF did ask for
penalties, any US court would have gone along; any US court would reject such a judgement (perhaps on
First Amendment grounds despite the 9th circuit's wording here):
[E]nforcement of that penalty is extremely
unlikely in the United States. Enforcement is unlikely not because of the
First Amendment, but rather because of the general principle of comity
under which American courts do not enforce monetary fines or penalties
awarded by foreign courts.
(Note that the US court is equating the French award with a fine
or penalty, rather than a
civil judgement. I am still not
sure exactly which category actually applied.)
Ironically, because Yahoo took the ethical
approach of banning the sale of hate materials, their legal
case became moot.
Judge William Fletcher:
1. Here is a summary of Yahoo's position:
For its part, while Yahoo! does not
independently wish to take steps to comply more fully with the French
court's orders, it states that it fears
that it may be subject to a substantial (and increasing) fine if it does
not. Yahoo! maintains that in these circumstances it has a
legally cognizable interest in knowing whether the French court's orders
are enforceable in this country.
2. The French court did not ask for restrictions on US citizens. If
geolocation filtering works, in other words, the issue is moot:
The legal question presented by this case is
whether the two interim orders of the French court are enforceable in this
country. These orders, by their explicit terms, require
only that Yahoo! restrict access by Internet users located in France.
The orders say nothing whatsoever about
restricting access by Internet users in the United States.
The underlying theory here is that the worldwide scope of a website is
not a given.
3. Maybe Yahoo is ok in France.
(Note, however, that the uncertainty still hangs over Yahoo.)
A second, more important, difficulty is that
we do not know whether the French court
would hold that Yahoo! is now violating its two interim orders.
After the French court entered the orders, Yahoo! voluntarily changed its
policy to comply with them, at least to some extent. There is some reason
to believe that the French court will not insist on full and literal
compliance with its interim orders, and that Yahoo!'s changed policy may
amount to sufficient compliance.
At other points, Judge Fletcher uses the fact that neither LICRA nor UEJF
has taken further steps as additional evidence that there is no "active
controversy". Another sentence along this line is
Until it knows what further compliance (if
any) the French court will require, Yahoo! simply cannot know what effect
(if any) further compliance might have on access by American users.
And here's the kicker, dismissing the "chilled speech" issue:
Without a finding that further compliance
with the French court's orders would necessarily result in restrictions on
access by users in the United States, the only question in this case is
whether California public policy and the First Amendment require
unrestricted access by Internet users in
France. [italics in original - pld]
The First Amendment applies in the US, not in France. Not that Judge
Fletcher doesn't get this:
We are acutely aware that this case
implicates the First Amendment, and we are particularly sensitive to the harm that may result from chilling effects on
protected speech or expressive conduct. In this case, however,
the harm to First Amendment interests — if such harm exists at all — may
be nowhere near as great as Yahoo! would have us believe.
Yahoo! refuses to point to anything that it
is now not doing but would do if permitted by the orders.
That, of course, was due to Yahoo's ethical
decision not to allow the sale of hate materials.
Judge Fletcher then states
In other words, as
to the French users, Yahoo! is necessarily arguing that it has a
First Amendment right to violate French criminal law and to facilitate the
violation of French criminal law by others. As we indicated above, the
extent -- indeed the very existence -- of such an extraterritorial right
under the First Amendment is uncertain.
The first phrase here, about French
users, was omitted by some sites that reported on the decision [including
me -- pld]; that omission decidedly changes Fletcher's meaning, which is
that the First Amendment does not necessarily protect French users.
Fletcher concludes with the following, implicitly addressing Yahoo's
issue that they were still allowing the sale of Mein
Kampf in violation of the French orders:
There is some possibility that in further
restricting access to these French users, Yahoo! might have to restrict
access by American users. But this possibility is, at this point, highly
speculative. This level of harm is not
sufficient to overcome the factual uncertainty bearing on the legal
question presented and thereby to render this suit ripe.
These issues led to the declaration of non-ripeness.
This is a JURISDICTIONAL case
that was left undecided, officially, though the Ninth Circuit certainly
hinted that France did not have authority to demand restrictions on US users.
At about the same time, there was growing improvement in
advertising-based geolocation software (IP addr -> location); the
earlier blocking estimates rose from 70% to well over 90%.
Google vs Censorship
Google is the indexer (though not the publisher) of most of the world's
information. As such, it is often the target of those who want some
particular category of speech suppressed.
Google has multiple portals. US users are most familiar with google.com, but
in Great Britain there is google.co.uk;
on June 15 2015 the latter had a Google Doodle celebrating 800 years since
the signing of the Magna Carta but this did not appear for google.com users.
In France there is www.google.fr
(but, oddly, not google.fr). In Germany there is google.de;
in China there is google.cn and in Iran
there would be google.ir, but Google does not operate there.
Google has long respected local regulations in its country-specific portals.
If one searches for "nazi" on google.com, the first link is to the American
Nazi Party; links to several other pro-Nazi rants also appear in the first
couple pages. On google.de, I could find only general-information links; the
first three are to Wikipedia articles. The Chinese site, google.cn, censors far more extensively.
Throughout the world, Google self-censors links to child pornography. Within
the US, Google abides by DMCA takedown notices to remove links; Google also
removes these links for the rest of its portals even though in theory the
jurisdiction of the DMCA is limited to the US.
The European Union has recognized a right to be forgotten
since at least 2006, meaning that personal records should not be publicly
available indefinitely. In 2012 the European Commission introduced new
regulations; based on these, Google introduced an option for individuals to
request that links be deleted.
On May 13, 2014 the Court of Justice of the European Union (CJEU) sided with
Mario Costeja González in his suit against Google. In 1998, a piece of
property owned by Costeja was subject to forced sale for debt payment; this
was reported in the Spanish newspaper La Vanguardia. In 2009, Costeja asked
the paper to remove the article; they refused because the original listing
was required by the Spanish government. Costeja then sued Google, in Europe,
and the CJEU agreed that the search engine should remove its links to the content
in question. See http://www.theguardian.com/technology/2014/oct/21/right-to-be-forgotten-who-may-exercise-power-information.
At this point Google agreed to remove the content from google.es,
and, eventually, from other EU-based portals, but left the material
accessible via google.com.
Google also introduced changes to make access to google.com more difficult
for Europeans. First, if you typed "google.com" in the address bar, an
automatic redirect would take you to the local national portal for Google,
eg google.es. There was a small link to the "real" google.com at the bottom
of the page. Later, this link was removed for all but the original search page.
Still, many Europeans were not happy that their private history might still
be easily available.
On June 12, 2015, the French Commission Nationale de l'informatique et des
libertés (CNIL) ordered Google to take down all right-to-be-forgotten
listings worldwide, that is, on all its portals including
the US google.com. See http://www.cnil.fr/english/news-and-events/news/article/cnil-orders-google-to-apply-delisting-on-all-domain-names-of-the-search-engine/:
Although the company has
granted some of the requests, delisting was only carried out on European
extensions of the search engine and not when searches are made from
"google.com" or other non-European extensions.
In accordance with the CJEU
judgement, the CNIL considers that in order to be effective, delisting
must be carried out on all extensions of the search engine and that the
service provided by Google search constitutes a single processing.
Google was given fifteen days to respond.
One option is for Google to in fact delete the information, though that
leads to a slope that is less slippery than precipitous: every nation wants
certain material censored.
In February 2016 Google implemented a mechanism to block the material to all
European IP addresses, using geolocation. If someone from France went to
google.com, they would not see the "deleted" information, but it would still
be visible to users outside of Europe. French users could then use
a VPN with a US termination point. The CNIL rejected
this compromise in March 2016, and fined Google €100,000. They also
held that, for liability purposes, the "Google search engine service
represents a single processing operation and the different geographic
extensions ('.fr', '.es', '.com', etc.) cannot be considered separate
processing operations". Therefore, to the extent that CNIL has jurisdiction
over google.fr, it also has jurisdiction over google.com.
Google appealed the fine in May 2016. See their
statement, in which they say "as a matter of both law and principle, we disagree with this demand".
Another avenue of compromise might be for Google to argue again that it is
the responsibility of the original publisher to take down the material, especially
when that publisher is within the EU, although the CJEU has already
officially ruled that Google must remove the search entry.
One problem here is that privacy is a core right to Europeans while freedom
of speech is a core right to those in the US. Another is whether France
should have any jurisdiction over US websites (or over German, English or
Venezuelan websites, for that matter).
Google, acting solely as a company located in California, cannot be forced
by the Europeans to do anything; no US court would extradite Google
employees or enforce a foreign judgement. However, Google conducts business
throughout the world and they may choose not to endanger this. Google may
not want to risk seizure of google.fr.
A few examples of requests to be forgotten can be found at http://www.theguardian.com/commentisfree/2014/jul/02/eu-right-to-be-forgotten-guardian-google.
What do you think of this "right to be forgotten"?
Equustek v Datalink
The Canadian firm Equustek sued the company Datalink Technologies Gateways
over trademark infringement. As a result of the verdict, Google was asked to
remove all links to DTG; in 2012 they did remove links visible through its
Canadian portal google.ca. Google refused, however, to remove links from
google.com. In June 2014 it lost its first-round case.
On June 11, 2015, a three-judge appellate panel from British Columbia ruled
unanimously that Google must block the DTG links worldwide -- including,
again, via google.com.
To be sure, it does appear DTG has behaved badly here, and in some ways this
case resembles a DMCA takedown case, for which Google does remove
links worldwide. However, the trademark case was limited to Canada; in
general, intellectual-property cases may fare quite differently in different
jurisdictions. Equustek has apparently not alleged that Datalink
infringed on a patent, for example, meaning DTG should be able to
sell their products freely as long as the trademark dispute is addressed.
Worse, the court's legal reasoning was not that DTG's
behavior warranted worldwide takedown, but rather that worldwide takedown
was the only reasonable option to make DTG disappear in Canada.
The decision states that
The plaintiffs have established, in my view,
that an order limited to the google.ca search site would not be effective.
I am satisfied that there was a basis, here, for giving the injunction worldwide effect.
The court did not completely ignore the free-speech question, but
interpreted it very narrowly, as pertaining to DTG's speech rather than Google's.
There has, in the course of argument, been
some reference to the possibility that the defendants (or others) might
wish to use their websites for legitimate free speech, rather than for
unlawfully marketing the GW1000 [the infringing device -- pld]. That
possibility, it seems to me, is entirely speculative. There is no evidence
that the websites in question have ever been used for lawful purposes....
The full decision is at https://s3.amazonaws.com/s3.documentcloud.org/documents/2096794/2015-bcca-265-equustek-solutions-inc-v-google-1.txt
The case is currently on appeal to the Supreme Court of Canada. A decision
is expected in summer 2017.
Illinois Eavesdropping Law
Should you be able to record the police in Illinois in public? Illinois law
used to make this illegal (actually, a felony):
(a) A person commits eavesdropping when he:
(1) Knowingly and intentionally uses an eavesdropping device for the
purpose of hearing or recording all or any part of any conversation or
intercepts, retains, or transcribes electronic communication unless he
(A) with the consent of all of the parties
to such conversation or electronic communication or
(B) in accordance with Article 108A or Article 108B of the "Code of
Criminal Procedure of 1963", approved August 14, 1963, as amended;
However, in a comparable case in Massachusetts, which has a similar law, the
First Circuit Appellate Court upheld not only overturning the law, but that
the person doing the recording had a right to sue the officers for false
arrest (meaning, in effect, that the officers should have known the law was unconstitutional).
From the decision at http://www.ca1.uscourts.gov/pdf.opinions/10-1764P-01A.pdf:
We conclude, based on the facts alleged,
that Glik was exercising clearly-established First Amendment rights in
filming the officers in a public space
On the other hand, Judge Richard Posner of the Seventh Circuit has spoken in
favor of the law [http://www.suntimes.com/news/7639298-418/judge-casts-doubt-on-aclu-challenge-to-law-forbidding-audio-recording-of-cops.html]:
"If you permit the audio recordings,
they'll be a lot more eavesdropping. ... There's going to be a lot of this
snooping around by reporters and bloggers," U.S. 7th Circuit Judge Richard
Posner said. "Yes, it's a bad thing. There is such a thing as privacy."
Still, on May 10, 2012, the Seventh Circuit issued an order banning
prosecution under the law in Cook County, and sent the case itself back to
the District Court. Judge Posner dissented. One of the points of his dissent
was that the Seventh Circuit's objections to the law were so broad that
nothing would forbid a third party
from making an audio recording of an arrest or other police interaction.
In September 2011, Crawford County judge David Frankland found the Illinois
law unconstitutional, for violating due process and criminalizing ordinary
behavior. On March 2, 2012, Cook County Circuit Court judge Stanley Sacks
also ruled that the law was unconstitutional, though again apparently not
because of the First Amendment. A later attempt to repeal the law failed in
the Illinois house.
In March 2014 the Illinois Supreme Court ruled that the law was
unconstitutional because it was "overbroad" and because of the
First-Amendment issue. From the decision:
We conclude ... that the ... eavesdropping
statute burdens substantially more speech than is necessary to serve a
legitimate state interest in protecting conversational privacy. Thus, it
does not survive intermediate scrutiny. We hold that the recording provision
is unconstitutional on its face because a substantial number of its
applications violate the first amendment.
In December 2014 Illinois passed a revised law that criminalized recording
without consent when the parties had a legitimate expectation of privacy.
Consider whether or not an anti-recording law should apply to the press.
Would that be consistent with the First Amendment [Congress shall make no
law ... abridging the freedom of speech, or
of the press]? Then the question becomes whether the press is
distinguishable from everyone else. See below on the Citizens United decision.
In June 2014, Carla Gericke reached a $57,000 settlement against the Weare,
NH police department. Gericke had videoed a police arrest of someone else,
was ordered at the time to stop, refused, and was arrested. She was released
the following day. More at http://arstechnica.com/tech-policy/2014/06/woman-charged-with-wiretapping-for-filming-cops-wins-57000-payout/.
As another example of the "who is the press" question, many states have
"shield laws" for the press regarding subpoenas of sources: the government
cannot subpoena reporters' notes or the identity of sources except in very
unusual conditions. What constitutes the "press" here? Should an established
blogger qualify? What about a beginning blogger?
The Illinois shield law has the following definitions:
Sec. 8‑902. Definitions.
(a) "Reporter" means any person regularly engaged in the business of
collecting, writing or editing news for publication through a news
medium on a full‑time or part‑time basis . . . .
(b) "News medium" means any newspaper or other periodical issued at
regular intervals whether in print or electronic format and having a
general circulation; a news service whether in print or electronic
format; a radio station; a television station; a television network; a
community antenna television service; and any person or corporation
engaged in the making of news reels or other motion picture news for public showing.
This definition of a reporter would appear
to cover a "regular" blogger; there is no mention of employment or a
The Supreme Court's 2010 ruling in Citizens United
v Federal Election Commission was widely reported as deciding
"corporations are people too". Actually, the ruling merely extends
free-speech rights to groups of
people (that is, corporations). The ruling also, however, made it clear that
there was no basis for singling out "media corporations" for
first-amendment protection (versus other corporations). In the present
circumstance, that would suggest that there is no way to distinguish between
public recording (or shielding of sources) by "the press" and public
recording or shielding by individuals; given that the former is
incontrovertibly protected, perhaps the latter is as well.
Finally, should newspapers be able to apply their journalism-shield laws to
anonymous online comments left regarding articles? See http://www.rcfp.org/newsitems/index.php?i=7086.
Here is the essential problem:
- an employee posts something critical at a site, "anonymously"
- the employer sues the site, claiming libel
- the site caves, and provides real identity of poster
- the suit is dropped, and the poster is FIRED.
This is a significant issue in the "free speech" of employees. Note how
giving providers an easy way to get libel cases dismissed via summary
judgement makes this strategy for corporations much more difficult.
Supposedly Apple employees are fired if they write about Apple online.
In 2004, some bloggers announced new Apple rumors. In this case,
apparently the rumors were accurate, and involved inside information from
Apple employees. Apple sued, in the case Apple
v Does, for the identities of the insiders. Apple argued in court
that bloggers were not covered by the California shield law, and that even
if they were they must still divulge the identities of their contacts. The
trial court ruled in Apple's favor in 2005; the California Court of
Appeals reversed in 2006. From the 2006 decision:
... the discovery process is intended as a
device to facilitate adjudication, not
as an end in itself. To accept Apple's position on the present
point would empower betrayed employers to clothe themselves with the
subpoena power merely by suing fictitious defendants, and then to use
that power solely to identify treacherous employees for purposes of
[discharge], all without any
intent of pursuing the underlying case to judgment. [S]ympathy for
employers in such a position cannot blind us to the gross
impropriety of using the courts and their powers of compulsory process as
a tool and adjunct of an employer's personnel department.
Note that the issue here is the use of the legal system to find
identities of anonymous posters. Baase has an entire section on anonymity
(4e §3.4 / 5e §3.5).
What about employee bloggers?
- They are free to speak
- Their employers are free to fire them
We do have the case of Dawnmarie Souza,
who was fired from American Medical Response of Connecticut in 2009 after
commenting on Facebook about her work environment. The NLRB, however,
weighed in (much later, Feb 11 2011) with a ruling (non-final because the
case was settled, but putting heavy pressure on employers) that Souza's
speech was "concerted protected activity" under NLRA (National Labor
Relations Act) rules for discussion of work conditions with
other employees. Souza was discussing conditions on a private
Facebook page, and had friended at least some coworkers, and was not
blogging publicly. AMR's work rules apparently prohibited any
discussion of work conditions on the internet.
Souza was unionized, but the NLRA applies to nonunionized workers as well.
However, the exact scope of the recent NLRB opinion is unclear.
Is source code speech?
Well, is it? On the one hand it is an expressive medium; on the other,
source code has a functional quality absent from political
rants. You can compile it and it does something.
Cases where it's been debated:
- DMCA DRM circumvention (eg deCSS)
- 3D-printing of firearms (not quite in a conventional programming language)
Encryption was a BIG issue for the US government, 1977 - ~ 2000
For a while, the NSA (National Security Agency) tried very hard to block
even publication of scientific papers. They would issue "secrecy orders".
But eventually the government's weapon of choice was ITAR: International
Traffic in Arms Regulations
Suppose you make F-16 fighters. You need a munitions export permit to
sell these overseas. What about if you make open-source encryption
software? You need the same kind of permit! Even if you GIVE IT AWAY!!
BOOKS were exempt. The rule applied only to machine-readable forms. For
a while, there was a machine-readable T-shirt with the RSA encryption
algorithm on it.
Discussion: does it make any
sense to ban the online source code, if a book in which the same code is
printed can be freely distributed?
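For a sense of how little code was at stake in the book-versus-machine-readable distinction, here is a toy RSA implementation in Python (tiny textbook primes, no padding, utterly insecure and purely illustrative). Printed on paper or a T-shirt this was exempt; on a floppy it needed an export permit.

    # Toy RSA: the sort of short, textbook code at issue in the export rules.
    # Tiny primes and no padding -- never use for real data.

    p, q = 61, 53
    n = p * q                 # modulus
    phi = (p - 1) * (q - 1)
    e = 17                    # public exponent, coprime to phi
    d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    msg = 42
    assert decrypt(encrypt(msg)) == msg
    print(encrypt(msg), decrypt(encrypt(msg)))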
Phil Zimmermann released PGP ("Pretty Good Privacy") as an open-source
project in 1991. The government made him promise not to do it again.
Zimmermann's associates outside the US released the next version.
Zimmermann was under criminal investigation for three years, but the case was dropped in 1996 without charges being filed.
PGP later became a commercial software company, but not before aiding in
the creation of the OpenPGP standard (and allowing that use of the PGP
name). The open-source version is now GPG (Gnu Privacy Guard).
In 1994 Bruce Schneier wrote a textbook Applied
Cryptography. All the algorithms were printed, and also included
verbatim on a 3.5" floppy disk in the back of the book. Phil Karn (of
Karn's Algorithm for estimating packet RTT times) applied for an export
license for the package, also in 1994. It was granted for the book
(actually, the book needed no license), but denied for the floppy.
Discussion: does this make sense?
Some of Karn's notes are at http://www.ka9q.net/export.
Daniel Bernstein created a cipher called "snuffle". In 1995, while a
graduate student at UC Berkeley, he sued to be allowed to publish his
paper on snuffle and to post it to the internet. In 1997 the district
court ruled in his favor. In 1999 a 3-judge panel of the 9th circuit ruled
in his favor, although more narrowly. Opinion of Judge Betty Fletcher:
- Prior restraint was one issue
- Bernstein's right to speak is the issue, not foreigners' right to hear
- But does source code qualify? See p 4232: C for-loop; 4233: LISP
Snuffle was also intended, in part, as political expression. Bernstein
discovered that the ITAR regulations controlled encryption exports, but
not one-way hash functions such as MD5 and SHA-1. Because he believed that
an encryption system could easily be fashioned from any of a number of
publicly-available one-way hash functions, he viewed the distinction made
by the ITAR regulations as absurd. To illustrate his point, Bernstein
developed Snuffle, which is an encryption system built around a one-way
hash function. (Arguably, that would now make Snuffle political
speech, generally subject to the fewest restrictions!)
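Bernstein's actual Snuffle construction isn't reproduced here, but the underlying point is easy to illustrate: any one-way hash can be used to generate a keystream, turning an "unrestricted" hash function into an encryption system. Below is a minimal sketch in Python using SHA-256 in a counter mode; it illustrates the idea only, and is neither Bernstein's design nor a vetted cipher.

    import hashlib

    # Turn a one-way hash into a stream cipher: hash (key, nonce, counter)
    # to produce keystream blocks, then XOR with the message. Decryption is
    # the same operation. An illustration of Bernstein's point, not Snuffle.

    def keystream(key, nonce, length):
        out = bytearray()
        counter = 0
        while len(out) < length:
            block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            out.extend(block)
            counter += 1
        return bytes(out[:length])

    def xor_crypt(key, nonce, data):
        ks = keystream(key, nonce, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    if __name__ == "__main__":
        key, nonce = b"export-controlled?", b"demo-nonce"
        msg = b"one-way hash functions were not restricted"
        ct = xor_crypt(key, nonce, msg)
        assert xor_crypt(key, nonce, ct) == msg
        print(ct.hex())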
Here is Judge Fletcher's main point:
[C]ryptographers use source code to express their scientific ideas in much the same
way that mathematicians use equations or economists use graphs. Of course,
both mathematical equations and graphs are used in other fields for many
purposes, not all of which are expressive. But mathematicians and
economists have adopted these modes of expression in order to facilitate
the precise and rigorous expression of complex scientific ideas.
Similarly, the undisputed record here
makes it clear that cryptographers utilize source code in the same fashion.
Government argument: ok, source code might be expressive, but you can
also run it and then it does
something: it has "direct functionality"
Fletcher: source code is
meant, in part, for reading. More importantly, the idea that it can be
banned due to its "direct functionality" is a problem: what if a computer
could be ordered to do something with spoken commands? Would that make
speech subject to restraint? In some sense absolutely
yes; if speech became action then it would be, well, actionable
(that is, something that could be legally prohibited).
In 1999, the full 9th Circuit agreed to hear the case; it was widely
expected to make it to the Supreme Court.
But it did not. The government dropped the case.
The government also changed the ITAR rules regarding cryptography.
Despite these changes, Bernstein continued to appeal. In 2003 a judge
dismissed the case until such time as the government made a "concrete threat" of enforcement.
Junger v Daley
Peter Junger was a professor at Case Western Reserve University. He wanted to
teach a crypto course, with foreign students.
The issue of whether or not the First
Amendment protects encryption source code is a difficult one because
source code has both an expressive feature and a functional feature.
The district court concluded that the functional characteristics of
source code overshadow its simultaneously expressive nature. The fact that
a medium of expression has a functional capacity should not preclude constitutional protection.
Because computer source code is an expressive
means for the exchange of information and ideas about computer
programming, we hold that it is protected by the First Amendment.
BUT: there's still a recognition of the need for balancing:
We recognize that national security
interests can outweigh the interests of protected speech and require the
regulation of speech. In the present case, the record does not resolve
whether ... national security interests should overrule the interests in
allowing the free exchange of encryption source code.
Apple and the FBI
The FBI wanted Apple to prepare a special version of iOS to enable
brute-force guessing of the iPhone passcode. Normally, trying all 4-digit or
even 6-digit combinations would be very fast, but the existing iOS software
introduces an artificial delay of up to 1 hour.
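To put the delay in perspective, here is the back-of-the-envelope arithmetic. The one-hour figure is the maximum delay mentioned above; the 80 ms "no delay" figure is an illustrative assumption, not Apple's specification.

    # Worst-case time to try every passcode, with and without the delay.
    HOUR = 3600  # seconds

    for digits in (4, 6):
        combos = 10 ** digits
        fast = combos * 0.080 / HOUR      # hours, at an assumed 80 ms per guess
        slow_years = combos / 24 / 365    # years, at 1 hour per guess
        print(f"{digits}-digit: {combos:,} codes; "
              f"~{fast:,.1f} h without the delay, ~{slow_years:,.0f} years with it")

Roughly: a 4-digit space takes minutes without the delay but over a year with it, and a 6-digit space goes from about a day to more than a century.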
One of Apple's more promising arguments has to do with free speech: the FBI
is asking Apple first to write some code, and then sign
that code, giving it the imprimatur of Apple approval. But longstanding
free-speech law states that the government cannot force you to say a
particular thing, and certainly cannot force you to put your name on it. You
have, in other words, the right not to speak.
And, as was made reasonably clear by the Bernstein and Junger cases, source
code is a form of speech.
Apple's primary argument was that the so-called "All
Writs" act, requiring third parties to cooperate with federal requests for
data access, was being misapplied here. Note, for example, that the FBI was
asking Apple to produce a special version of iOS. That's a major
undertaking, and not in the same class at all as asking them to provide
access to someone's iCloud files (which Apple does do).
Restrictions on Circumvention Speech
The DMCA also has a speech restriction:
(1) No person shall manufacture, import, offer to the public, provide, or
otherwise traffic in any technology, product, service, device, component,
or part thereof, that— (A) is primarily designed or produced for the
purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner ...
If you write an online or print article about how to bypass copy-protection,
you may be violating this.
There are several such cases; the best known is Universal
Studios v Reimerdes, Corley, and Kazan. Eric Corley, aka Emmanuel
Goldstein, is the publisher of 2600
magazine. In 2000 the magazine included an article about a new program,
"deCSS", that removed the CSS ("content-scrambling system") encryption
from DVDs, thus allowing them to be copied to hard disks, converted to new
formats, and played on linux systems.
DeCSS was developed in ~1999, supposedly by Jon Lech Johansen. He wrote
it with others; it was released in 1999 when Johansen was ~16. He was
tried in Norway in 2002, and was acquitted.
Cute story about Jon: In 2005, supposedly Sony
stole some of his GPL-covered code for their XCP "rootkit" project. Jon
might have been able to sue for huge damages (though the usual
RIAA-lawsuit standard is based on statutory damages per item copied, and
here Sony might argue only one thing was copied). More at http://news.slashdot.org/story/05/11/17/1350209/dvd-jons-code-in-sony-rootkit
Judge Kaplan memorandum, Feb 2000, in Universal v Reimerdes:
As a preliminary matter, it is far from clear
that DeCSS is speech protected by the First Amendment. In material
respects, it is merely a set of instructions that controls computers.
He then goes on to consider the "balancing" approach between free speech
and regulation, considering the rationale for the regulation and the
relative weights of each side.
The computer code at issue in this case does
little to serve these goals [of expressiveness]. Although this Court has
assumed that DeCSS has at least some
expressive content, the expressive aspect appears to be minimal when
compared to its functional component. Computer code primarily is
a set of instructions which, when read by the computer, cause it to
function in a particular way, in this case, to render intelligible a data
file on a DVD. It arguably "is best treated as a virtual machine . . . ."
[the decision cites Lemley & Volokh, Freedom of Speech and Injunctions in
Intellectual Property Cases, Duke Law Journal 1998. However, the
sentence in Lemley and Volokh's paper explicitly refers to executable object code, not source! "The
Bernstein court's conclusion, even if upheld, probably doesn't extend past
source code to object code,
however. We think most executable software is best treated as a virtual
machine rather than as protected expression." Judge Kaplan apparently did
not grasp the distinction, though, to be fair, the above quote appeared
only in the initial memorandum, and not in the final decision.]
Note that this virtual-machine argument renders irrelevant the Bernstein
precedent! Actually, the virtual-machine argument pretty much presupposes
that you have come down solidly on the side of code-as-function instead of code-as-speech.
Also note the weighing of expression versus functionality, with the
former found wanting.
As for the free-speech issue, the final decision contains the following
It cannot seriously be argued that any form of
computer code may be regulated without reference to First Amendment
doctrine. The path from idea to human language to source code to object
code is a continuum.
The "principal inquiry in determining content
neutrality ... is whether the government has adopted a regulation of
speech because of [agreement or] disagreement with the message it
conveys." The computer code at issue in this case, however,
does more than express the programmers' concepts. It does more,
in other words, than convey a message. DeCSS, like any other
computer program, is a series of instructions that causes a computer to
perform a particular sequence of tasks which, in the aggregate, decrypt
CSS-protected files. Thus, it has a distinctly functional,
non-speech aspect in addition to reflecting the thoughts of the programmers.
What do you think of this idea that the DMCA is "non-content-based"
regulation? What would you say if someone claimed deCSS was intended to
express the view that copyright was an unenforceable doctrine?
Do you think that Judge Kaplan was stricter here than in the crypto cases
because crypto was seen as more "legitimate", and deCSS was clearly
intended to bypass anticircumvention measures?
The district court issued a preliminary injunction banning 2600.com from
hosting deCSS; the site then simply included links to other sites carrying
it. The final injunction also banned linking to such sites. Furthermore,
the decision included language that equated linking with hosting.
Universal v Reimerdes, Appellate Court
The Appellate decision was similar to Judge Kaplan's District Court
opinion, though with somewhat more on the constitutional issues, and an
additional twist on linking. Also, note that one of Corley's defenses was
that he was a journalist:
Writing about DeCSS without including the
DeCSS code would have been, to Corley, "analogous to printing a story
about a picture and not printing the picture."
However, in full context, that idea was harder to support. Corley's
mistake was in describing DeCSS as a
way to get free movies. What if he had
stuck to the just-the-facts approach, and described exactly how easy it
was to copy DVDs without actually urging you to do it? Is this similar to
the theoretical "Grokster" workaround?
Both the DC and Appellate courts held that the DMCA targets only the
"functional component" of computer speech.
One argument was that the CSS encryption makes Fair Use impossible, and
that therefore the relevant section of the DMCA should be struck down. The
appellate court, however, ruled instead that "Subsection 1201(c)(1)
ensures that the DMCA is not read to prohibit the 'fair use' of
information just because that
information was obtained in a manner made illegal by the DMCA".
Subsection 1201(c)(1) reads
Rights, Etc., Not Affected. — (1) Nothing in this section
shall affect rights, remedies, limitations, or defenses to copyright
infringement, including fair use, under this title.
In other words, while the DMCA can make Fair Use impossible, it remains
an affirmative defense against charges of infringement. This is an
interesting argument by the court! Literally it is correct, but the practical problems with Fair Use
access go unaddressed.
There is also, though, another issue: Corley was not being charged with
infringement. Literally speaking, Fair Use is not an affirmative defense
against charges of violating the DMCA anticircumvention provisions.
Some notes on the free-speech argument:
Communication does not lose constitutional
protection as "speech" simply because it is expressed in the language of
computer code. Mathematical formulae and musical scores are written in
"code," i.e., symbolic notations not comprehensible to the uninitiated,
and yet both are covered by the First Amendment.
The court also acknowledged Junger v Daley (above).
As the District Court recognized, the scope
of protection for speech generally depends on whether the restriction is
imposed because of the content of
the speech. Content-based restrictions are permissible only if they serve
compelling state interests and do so by the least restrictive means available.
A content-neutral restriction is
permissible if it serves a
substantial governmental interest, the interest is unrelated to the
suppression of free expression, and the regulation is narrowly tailored,
which "in this context requires . . . that the means chosen do not 'burden
substantially more speech than is necessary to further the government's legitimate interests'".
That is, the DeCSS code may be said to be "expressive speech", but it is
not being banned because of what it expresses.
The Appellants vigorously reject the idea
that computer code can be regulated according to any different standard
than that applicable to pure speech, i.e.,
speech that lacks a nonspeech component. Although recognizing that code is
a series of instructions to a computer, they argue that code is
no different, for First Amendment purposes, than blueprints that
instruct an engineer or recipes that instruct a cook. See
Supplemental Brief for Appellants at 2, 3. We disagree.
Unlike a blueprint or a recipe, which cannot yield any functional result
without human comprehension of its content, human decision-making, and
human action, computer code can instantly cause a computer to accomplish
tasks.... These realities of what code
is and what its normal functions are require a First Amendment analysis
that treats code as combining nonspeech and speech elements, i.e.,
functional and expressive elements.
As for hyperlinks (in the section "Linking"),
a hyperlink has both a speech and a
nonspeech component. It conveys information, the Internet address of the
linked web page, and has the functional capacity to bring the content of
the linked web page to the user's computer screen.... The linking
prohibition is justified solely by the functional capability of the hyperlink.
What if one simply printed the site name, without
the link: eg cs.luc.edu? For links, one can argue that the expressive and
functional elements -- what the other site is, and how to get there -- are one and the same.
The non-linking rule may become more of an issue as time goes on and the
US attempts to remove from the DNS system sites which provide illegal
access to copyrighted material. In the future, identifying a new IP
address for, say, the now-seized megaupload.com may be suspicious.
Ironically, nobody uses deCSS any more. You can get the same effect with
Videolan's VLC player,
originally offered as a Linux DVD player but now also popular on Windows and other platforms.
Furthermore, as of Windows 8, Microsoft will no longer supply a free DVD
player with Windows. So VLC is it.
How did this come about? MS, after all, introduced protected
processes into Windows 7, under pressure from the media
industry, specifically to prevent attaching debuggers to read things like
embedded CSS decryption keys.
The MS issue turns out to be the MPEG-2 patent-licensing fee. It's $2.00
per device, according to http://www.mpegla.com/main/programs/M2/Pages/Agreement.aspx:
(1) For MPEG-2 Decoding
Products in hardware or software (such as those found in set-top boxes,
DVD players and computers equipped with MPEG-2 decode units), the
royalty is US $2.50 from January 1, 2002 and $4.00/unit before January
1, 2002 (Sections 2.1 and 3.1.1), but $2.00 under the
new extended License from the later of January 1, 2010 or execution of the License.
Videolan is based in France. They state on their legal page (http://www.videolan.org/legal.html):
Neither French law nor European conventions
recognize software as patentable (see French section below).
Therefore, software patents licenses do not apply on VideoLAN software.
MS doesn't want to add $2.00 per license just for a DVD player, especially
as this would then have to be paid for by business users who really do not need it.
But what about deCSS? This has evolved into libdvdcss. There is no
patent issue here, apparently, but the DMCA anti-circumvention rules
theoretically criminalize its distribution. The Videolan site states
libdvdcss is a library that can find and guess
keys from a DVD in order to decrypt it.
Vive la France!
This method is authorized by a French law decision CE
10e et 9e sous-sect., 16 juillet 2008, n° 301843 on interoperability.
Gallery of DeCSS: http://www.cs.cmu.edu/~dst/DeCSS/Gallery
Check out these in particular:
Does the entire gallery serve to establish an expressive purpose?
Lower down appears some correspondence between Touretzky and the MPAA.
At the beginning, in the third paragraph, Touretzky states that the purpose
of the page is to "point out the absurdity of Judge Kaplan's position that
source code can be legally differentiated from other forms of written
expression". Is Judge Kaplan's position absurd?
One argument is that computer scientists do use source code as expressive
speech, but mostly in fragments. That is, a textbook on
encryption would need to include only a few crucial routines, and then
perhaps only in pseudocode form. Code is functional, by comparison, only
when it's the entire thing, complete with error checking and Makefile. By
this standard, the mostly-fragmentary Touretzky gallery examples are indeed
expressive, but not really functional.
Richard Stallman has argued, convincingly, that programmers do sometimes
read entire code libraries as expressive speech. Or at least that he himself
does. But he is an exceptional case.
A more widespread problem with the "expressive fragments" idea is that Alice
could publish just the DeCSS algorithm, as a function, and Bob could publish
a general framework for processing video files. Alice's code meets the
"expressive fragment" test, as it is not functional by itself, and Bob's
code is not controversial at all. But paste the two together and you have a
complete, functional decrypter, as the sketch below illustrates.
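A toy sketch of the point, in ordinary Python; the XOR "cipher" is a made-up
stand-in, not actual CSS descrambling:

    # Alice's "expressive fragment": it states an algorithm, but is not a usable tool.
    # The XOR cipher below is a hypothetical stand-in for the real descrambler.
    def alice_descramble(block: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

    # Bob's generic, uncontroversial framework: process any file in fixed-size blocks.
    def bob_process_file(src, dst, transform, key, blocksize=2048):
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while True:
                block = fin.read(blocksize)
                if not block:
                    break
                fout.write(transform(block, key))

    # Pasted together, the two fragments are a complete, functional program:
    #   bob_process_file("movie.vob", "movie_clear.vob", alice_descramble, b"\x12\x34\x56\x78\x9a")

Neither half alone does anything objectionable; the combination does.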
What keys can I tell you to press?
In 2005 Sony introduced its "XCP" copy-protection system for CDs, developed
by First 4 Internet (a second system found on Sony discs, MediaMax, came from
SunnComm). If you inserted a protected audio CD into a Windows computer, a
small data filesystem would mount and install a driver that was supposed to
block copying of the music tracks. The system used the Windows
"auto-run" feature to achieve this without user involvement. Auto-run was
always a security risk, and should be (and now I believe is)
disabled on CDs and DVDs. It can also be disabled on a per-mount basis by
holding down the "shift" key while inserting the media.
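The mechanism was an autorun.inf file in the disc's data session, which Windows
read and obeyed on insertion. A generic sketch of such a file (the file names
here are made up, not the actual XCP installer):

    ; autorun.inf -- processed automatically by Windows when the disc is mounted
    [autorun]
    open=contents\setup.exe
    icon=contents\cd.ico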
In 2003, John "Alex" Halderman, a graduate student of Edward Felten at
Princeton, discovered an earlier SunnComm system, MediaMax CD3, on a BMG
release (Anthony Hamilton's Comin' From Where I'm From). His paper, dated
Oct 6, 2003, is here.
most users who would be affected can bypass
the system entirely by holding the shift key every time they insert the CD.
Supposedly Halderman said something like, "they can't sue me for telling
people to hold down the shift key, can they?"
Three days after the release of his paper, SunnComm announced that it would
indeed sue Halderman, under the DMCA and also for damaging the company's
reputation (its stock had dropped). Within days, though, SunnComm backed down
and withdrew the threat.
Did Halderman "offer to the public ... any technology ... that— (A) is
primarily ... for the purpose of circumventing [copy] protection"? Does
telling someone how to circumvent copy protection count? What about telling
someone how to use standard software to accomplish this? What about
providing open-source software to do this? What about for-sale software,
sold only outside the US?
While it's not exactly source code in the sense of a human-readable
programming language, you can now get 3D-printer files to produce a handgun.
The plans were developed by Cody
Wilson, who named the gun the Liberator.
Such files may well be in XML format, making them "sort of"
human-readable. In effect they represent a "program" for producing a
physical object.
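One plausible concrete format is ASCII STL, widely used for 3D printing:
readable as text, but consumed by the printer as instructions for where to
deposit material. A minimal sketch (a single made-up triangle, not part of any
actual gun model):

    solid example_part
      facet normal 0 0 1
        outer loop
          vertex 0.0 0.0 0.0
          vertex 10.0 0.0 0.0
          vertex 0.0 10.0 0.0
        endloop
      endfacet
    endsolid example_part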
The US government has demanded that the files be taken down from the defcad.com
site (owned by Defense Distributed) that originally hosted them, under the
same ITAR regulations that were formerly applied to encryption software. But
the files are still widely available.
In May 2015, Defense Distributed filed a lawsuit
against the State Department, arguing that their First Amendment rights were
violated by the ITAR takedown order.
The Liberator is made of plastic. As such, there is a distinct possibility
of catastrophic failure. Wilson has another gun-making project called Ghost
Gunner. It involves a $1200 CNC
milling machine that can complete the manufacture of an AR-15 rifle (the
military version, with full-automatic operation, is known as the M-16).
The US Government has designated the lower
receiver as the central part of the AR-15 rifle: all other
AR-15 parts can be purchased without serial numbers and without
identification or permit. The lower receiver, in turn, isn't considered an
"official" AR-15 lower receiver unless it is more than 80% completed. The
Ghost Gunner project involves buying the CNC milling machine, a so-called
"80% lower receiver" (that is, one that is 80% finished), and the other
necessary parts for an AR-15. One then uses the milling machine to complete the lower
receiver, and then assembles everything.
The lower receiver can also be 3D-printed. As the barrel, where the actual
firing takes place, is steel, a polymer lower-receiver is not a safety
hazard. See https://www.ar15s.com/wp-content/uploads/2014/10/Tungsten-Silver-80-AR-15-Lower-Receiver2-1024x637.jpg.
Is this still speech?