Congress shall make no law ... abridging the
freedom of speech, or of the press;
Right off the bat note the implicit distinction between "speech" and "the press": blogging wasn't foreseen by the Founding Fathers!
For that matter, the Founding Fathers did not foresee "functional" information: computer files, for example, that are in some ways speech-like, but which have significant consequences when executed or otherwise processed.
While laws in general tend toward utilitarian justifications, the right
of free speech, though not absolute, is seen as close to fundamental.
Specifically, speech may be restricted only if doing so is the least
restrictive means of accomplishing the desired end. In this
sense, freedom of speech under the US constitution can be seen as a
fundamental duty of the
government, more akin to deontological reasoning.
The courts have held that Congress can
abridge "offensive" speech. Here are a few examples (some of which no longer apply):
There is also harassing speech, including "cyber-bullying" and stalking.
All of these are categories of speech that have long been regulated, but in which, once upon a time, private individuals seldom if ever got caught up.
Baase 4e p 152 / 5e p 158: the Commodity-Futures Trading Commission
(CFTC) required that, if you wrote about commodity futures, you needed a
license. The regulations were originally intended to cover traders, but
CFTC applied them to newsletters too, and then the web. (These latter
rules were deleted in 2000.)
New York State outlawed not only the direct sale
of wine from out-of-state-wineries to New Yorkers, but also the advertising.
What about web pages?
Political campaign-finance laws. Anything you do that is "coordinated" with a political campaign is considered to be a contribution. These are subject to limitations, and to reporting requirements.
Under the original terms of the McCain-Feingold act, you could not even include a candidate's name or face in a newspaper article within 60 days of an election.
In 2004, the Federal Election Commission was ordered by a judge to write rules extending the McCain-Feingold rules to the Internet. How would this affect bloggers? Would they be silenced? Note that the opposing candidates are VERY likely to file complaints.
The FEC issued these new Internet rules in 2006, deciding that blogging about candidates was ok as long as you weren't paid, even if the blogging was "in coordination with" the candidate.
2007: The Supreme Court struck down the McCain-Feingold restriction on issue ads (FEC v Wisconsin Right to Life)
2010: Supreme Court struck down most restrictions on corporate speech (the Citizens United case)
2014: Supreme Court struck down most campaign-donation caps on First
Amendment grounds (McCutcheon v FEC)
Home selling: if you list your house online, do you need a real-estate license?
Sexual material, including pornography
(though that is a pejorative term), has been regulated for a long time.
Miller v California, Supreme Court 1973 (unrelated to US v Miller 1976):
this case established a three-part guideline for determining when
something was legally obscene (as opposed to merely "indecent"):
For the internet, community standards
is the problem: what community?
This is in fact a huge problem,
though it was already a problem with mail-order.
As the Internet became more popular with "ordinary" users, there was mounting concern that it was not "child-friendly". This led to the Communications Decency Act (CDA), next.
In 1996 Congress passed the Communications Decency Act (CDA) (Baase 4e p 141 / 5e p 146). It was extremely broad.
From the CDA:
On the internet, you cannot tell how old someone is.
Butler v Michigan, 1957: SCOTUS struck down a law making it illegal to sell material (pornography) in Michigan solely because it might be harmful to minors.
The CDA was widely viewed as an attempt by Congress to curry favor with a "Concerned Public", while knowing full well it was unlikely to withstand court scrutiny.
It did not. The Supreme Court ruled unanimously in 1997 that the
censorship provisions were not ok: they were too vague and did not use the
"least-restrictive means" available to achieve the desired goal.
The Child Online Protection Act (COPA) was passed in 1998. This still
stuck with the "community standards" rule. The law also authorized the
creation of a commission; this was the agency that later wanted some of
Google's query data. The bulk of COPA was struck down.
The Children's Internet Protection Act (CIPA) was passed in 2000 (Baase, 4e p 142 / 5e p 147) Schools that want federal funding have to install filters. So do public libraries; however, libraries must honor patron requests to turn the filter off.
SCOTUS upheld CIPA in 2003.
The Chicago Public Library gave up on filters, but did install screen covers that make it very hard for someone to see what's on your screen. This both protects patron privacy AND protects library staff from what might otherwise be a "hostile work environment".
Baase has more on the library situation, 4e p 143/ 5e p 147
Filters are sort of a joke, though they've gotten better. However, they CANNOT do what they claim. They pretty much have to block translation sites and all "personal" sites, as those can be used for redirection; note that many sites of prospective congressional candidates are of this type. See peacefire.org. And stupidcensorship.com. And http://mousematrix.com/.
This is merely at the technical level. A more insidious problem with filtering is a frequent conservative bias. Peacefire documents an incident in 2000 where a web page with quotes about homosexuality was blocked. All the quotes, however, came from the websites of conservative anti-homosexuality sites, which were not blocked. At a minimum, one can expect that (as with the MPAA) the blocking threshold for information on homosexual sexuality will be lower than for heterosexual sexuality. (To be fair, some conservative sites, such as some sites relating to the Second Amendment, have also had trouble with blocks.)
Demo: use http://mousematrix.com/ to get to thepiratebay.org. (Or the Tor browser, but that's harder to set up.)
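The technical weakness here can be sketched in a few lines. Below is a toy URL-blocklist filter (hypothetical code, far simpler than any commercial product, but it shares the structural flaw): because the filter sees only the hostname the user contacts, any redirection or translation site punches a hole straight through it.

```python
from urllib.parse import urlparse

# Hostnames the filter has been told to block (illustrative entry only).
BLOCKLIST = {"thepiratebay.org"}

def host_of(url: str) -> str:
    # Extract just the hostname portion of a URL.
    return urlparse(url).hostname or ""

def is_blocked(url: str) -> bool:
    # The filter's entire decision is based on the hostname contacted.
    return host_of(url) in BLOCKLIST

# Direct access is caught:
print(is_blocked("http://thepiratebay.org/"))  # True

# But routed through a redirector, the filter only ever sees the
# redirector's hostname, never the final destination:
print(is_blocked("http://mousematrix.com/?url=http://thepiratebay.org/"))  # False
```

This is why filters "pretty much have to" block all redirection and translation sites wholesale: short of fetching and inspecting every page's actual content, hostname-based blocking cannot see where a redirector ultimately leads.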
To a degree, problems with over-filtering in K-12 schools are
not serious: student use of computers is fundamentally intended to support
their education, and if a site needed for coursework is blocked, it can presumably be unblocked on request.
One piece of the CDA survived: §230:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. [Wikipedia]
Why is this there?
Note that there is no limit of §230 to any particular area of law, eg
libel. (Actually, there are
limits if the issue is copyright law, or criminal law.)
Note also that §230 addresses "publisher" liability and "author"
liability. Another form, not
exempted, is "distributor" liability.
The actual law is here: law.cornell.edu/uscode/text/47/230.
Note in particular the exemption sections (e)(1) and (e)(2). Note also
that section 230 is titled "Protection for private blocking and
screening of offensive material".
Cubby v CompuServe, 1991: District court only, New York State. (Does anyone remember CompuServe?) Giant pre-Internet BBS available to paid subscribers. The "Rumorville" section, part of the Journalism Forum, was run by an independent company, Don Fitzpatrick Associates. Their contract guaranteed DFA had "total responsibility for the contents". Rumorville was in essence an online newspaper; essentially it was an expanded gossip column about the journalism industry. I have no idea who paid whom for the right to be present on CompuServe.
In 1990, Cubby Inc and Robert Blanchard planned to start a competing online product, Skuttlebut. This was disparaged in Rumorville. Cubby et al sued DFA & CompuServe for libel.
CompuServe argued they were only a distributor. The judge agreed that this meant they escaped liability, and granted summary judgment in their favor. The court ruled that they had no control at all over content. They are like a bookstore, or a distributor.
While CompuServe may decline to carry a given publication altogether, in reality, once it does decide to carry a publication, it will have little or no editorial control over that publication's contents. This is especially so when CompuServe carries the publication as part of a forum that is managed by a company unrelated to CompuServe.
CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.
It was and is generally accepted that distributors have no liability for content (unless it can be proven that they encouraged the content).
(we'll come back to "distributor liability" later.)
Stratton Oakmont v Prodigy: New York state court, 1995. On a financial-matters forum called "Money Talk," a Prodigy user (never identified) posted about Daniel Porush, the president of Stratton Oakmont, a financial services company. The remarks called Porush a "soon to be proven criminal" and said that Stratton Oakmont was a "cult of brokers who either lie for a living or get fired".
Prodigy claimed the Compuserve defense in their motion for summary judgment.
Prodigy lost, because they promised to monitor for bad behavior on the board. At the very least, they CLAIMED to the public that they reserved the right to edit or remove messages. This was in fact part of Prodigy's family-oriented marketing. Prodigy was trying to do "family values" editing (including the deletion of profanity), and it cost them.
In legal terms, Prodigy was held to "publisher liability" rather than the weaker "distributor liability" because they CLAIMED to exercise editorial judgment.
Prodigy did have some internal confusion about whether they were for the "free expression of ideas" or were "family safe"
Prodigy's policy was to ban individual attacks, but not group attacks; anti-semitic rants did appear and were not taken down.
After Prodigy lost their motion for summary judgment, the case was
settled; Prodigy issued a public apology. In Wall
Street versus America by Gary Weiss, the claim is made that the
settlement did not involve the exchange of money. See http://books.google.com/books?id=iOhGkYqaEdwC&pg=PA213&lpg=PA213&dq=wall+street+versus+america+porush&source=b...t=result,
page 215: "No money changed hands. No money had to change hands."
Weiss also points out that four years later Porush pleaded guilty to securities fraud; the anonymous poster's claims turned out to be essentially true.
Enter the CDA. §230 was intended to encourage
family-values editing, because after the Stratton Oakmont case most
providers were afraid to step in.
Whether this was specifically to encourage providers to remove profanity & obscenity, the nominal targets of the CDA, or whether it was just a compensatory free-speech-positive clause in an overall very free-speech-negative law is not clear.
Most of Congress apparently did not expect the CDA to withstand judicial scrutiny.
Congressional documents suggest that fixing the Stratton Oakmont precedent was the primary purpose of §230. However, arguably the reason for fixing Stratton Oakmont was to protect ISPs and websites that did try to provide a "family-friendly" environment.
Batzel v Cremers: Ellen Batzel, a lawyer, hired a handyman, Bob Smith, who saw her collection of paintings. Smith developed the theory that the paintings were artwork stolen by the Nazis and inherited by Batzel.
Smith had a dispute with Batzel [either about payments for work, or about
Batzel's refusal to use her Hollywood contacts to help Smith sell his
movie script]. It is not clear to what extent this dispute influenced
Smith's artwork theory.
Smith sent his allegations about Batzel in an email to Ton Cremers, who ran a stolen-art mailing list. Smith found Cremers through a search engine. This is still 1999.
Smith claimed in his email that some of Batzel's paintings were likely stolen by the Nazis. (p 8432 of the decision, Absolute Page 5)
Smith sent the email to email@example.com
Cremers ran a moderated listserv specializing in this. He included Smith's email in his next release. Cremers exercised editorial control both by deciding inclusion and also by editing the text as necessary.
He included a note that the FBI had been notified.
Normal address for Cremer's list was: firstname.lastname@example.org
Smith's emailed reply to someone when he found out he was on the list:
I [was] trying to figure out how in blazes I could have posted me [sic] email to [the Network] bulletin board. I came into MSN through the back door, directed by a search engine, and never got the big picture. I don't remember reading anything about a message board either so I am a bit confused over how it could happen. Every message board to which I have ever subscribed required application, a password, and/or registration, and the instructions explained this is necessary to keep out the advertisers, cranks, and bumbling idiots like me.
Some months later, Batzel found out and contacted Cremers, who contacted Smith, who continued to claim that what he said was true. However, he did say that he had not intended his message for posting.
On hearing that, Cremers did apologize to Smith.
Batzel disputed having any familial relationship to any Nazis, and stated the artwork was not inherited.
Batzel sued in California state court:
Cremers filed in Federal District Court for:
He lost on all three counts. (Should he have? We'll return to the
jurisdiction one later. Jurisdiction is a huge issue in libel law!). The
district court judge ruled that Cremers was not an ISP and so could not
claim §230 immunity.
Cremers then appealed the federal issues (anti-SLAPP, jurisdiction, §230)
to the Ninth Circuit, which simply ruled that §230 meant Batzel had no
case. (Well, there was one factual determination left for the District
Court, which then ruled on that point in Cremers' favor.)
This was the §230 case that set the (famous) precedent. This is a
major case in which both Congress and the courts purport to "get it" about
the internet. But note that there was a steady evolution:
Is Cremers like an ISP here? The
fact that he is editing the list he sends out sure gives him an active
role, and yet it was Prodigy's active-editing role that the CDA §230 was
arguably intended to protect.
Cremers is an individual, of course, while Prodigy was a huge
corporation. Did Congress mean to give special protections to corporations
but not individuals?
Cremers was interested in the content on his list, but he did not create
much if any of it.
Prodigy was interested in editing to create "family friendliness".
Cremers edited basically to tighten up the reports that came in.
Why does Communications Decency Act have such a strong free-speech component? Generally free speech is something the indecent are in favor of.
The appellate case was heard by the Ninth Circuit (Federal Appellate
court in CA, other western states); a copy is at BatzelvCremers.pdf.
(Page numbers in the sequel are given as printed/relative.)
[Opening (8431/4)] There is no reason inherent in the technological features of cyberspace why First Amendment and defamation law should apply differently in cyberspace than in the brick and mortar world. Congress, however, has chosen for policy reasons to immunize from liability for defamatory or obscene speech "providers and users of interactive computer services" when the defamatory or obscene material is "provided" by someone else.
Note the up-front recognition that this is due to Congress.
Section 230 was first offered as an amendment by Representatives Christopher Cox (R-Cal.) and Ron Wyden (D-Ore.). (8442/15)
Congress made this legislative choice for two primary reasons. First, Congress wanted to encourage the unfettered and unregulated development of free speech on the Internet, and to promote the development of e-commerce. (8443/16) ...
(Top of 8445/18) The second reason for enacting § 230(c) was to encourage interactive computer services and users of such services to self-police the Internet for obscenity and other offensive material
[extensive references to congressional record]
(8447/20): In particular, Congress adopted § 230(c) to overrule the decision of a New York state court in Stratton Oakmont, 1995
Regarding question of why a pro-free-speech clause was included in an anti-free-speech law (or, more precisely, addressing the suggestion that §230 shouldn't be interpreted as broadly pro-free-speech simply because the overall law was anti-free-speech):
(8445/18, end of 1st paragraph): Tension within statutes is often not a defect but an indication that the legislature was doing its job.
The District court limited this to ISPs [what are they?]. The Circuit
court argued that (a) Cremers was
a provider of a computer service, and (b) that didn't matter because he
was unquestionably a user.
But could user have been
intended to mean one of the army of Prodigy volunteers who kept lookout
for inappropriate content? It would do no good to indemnify Prodigy the
corporation if liability then simply fell on the volunteer administrators
of Prodigy's editing system. Why would §230 simply say "or user" when what
was meant was a specific user who was distributing content?
8450/23: Critically, however, § 230 limits immunity to information "provided by another information content provider."
Here's one question: was Smith
"another content provider"? You can link and host all you want, provided
others have created the material for
online use. But if Smith wasn't a content provider, then Cremers
becomes the originator.
The other question is whether Cremers was in fact partly the "provider",
by virtue of his editing. Note, though, that the
whole point of §230 is to allow (family-friendly) editing. So
clearly a little editing cannot be enough to void the immunity.
Here's the Ninth Circuit's answer to whether Cremers was the content
provider [emphasis added]:
8450/23, 3rd paragraph: Obviously, Cremers did not create Smith's e-mail. Smith composed the e-mail entirely on his own. Nor do Cremers's minor alterations of Smith's e-mail prior to its posting or his choice to publish the e-mail (while rejecting other e-mails for inclusion in the listserv) rise to the level of "development."
More generally, the idea here is that there is simply no way to extend immunity to Stratton-Oakmont-type editing, or to removing profanity, while failing to extend immunity "all the way".
Is that actually right?
The Court considers some other partial interpretations of §230, but finds they are unworkable.
8454/27, 3rd paragraph: Smith's confusion,
even if legitimate, does not matter, Cremers maintains, because the
§230(c)(1) immunity should be available simply
because Smith was the author of the e-mail, without more. We disagree. Under Cremers's broad
interpretation of §230(c), users and providers of interactive computer
services could with impunity intentionally post material they knew was
never meant to be put on the Internet. At the same time, the creator or
developer of the information presumably could not be held liable for
unforeseeable publication of his material to huge numbers of people with
whom he had no intention to communicate. The result would be nearly
limitless immunity for speech never meant to be broadcast over the
Internet. [emphasis added]
The case was sent back to district court to determine this point (which
it did, in Cremer's favor).
8457/30: We therefore ... remand to the district court for further proceedings to develop the facts under this newly announced standard and to evaluate what Cremers should have reasonably concluded at the time he received Smith's e-mail. If Cremers should have reasonably concluded, for example, that because Smith's e-mail arrived via a different e-mail address it was not provided to him for possible posting on the listserv, then Cremers cannot take advantage of the §230(c) immunities.
Judge Gould partial dissent in Batzel v Cremers:
The majority gives the phrase "information provided by another" an incorrect and unworkable meaning that extends CDA immunity far beyond what Congress intended.
(1) the defendant must be a provider or user of an "interactive computer service"; (2) the asserted claims must treat the defendant as a publisher or speaker of information; and (3) the challenged communication must be "information provided by another information content provider." The majority and I agree on the importance of the CDA and on the proper interpretation of the first and second elements. We disagree only over the third element.
Majority: part (3) is met if the defendant believes this was the
author's intention. Gould: This
is convoluted! Why does the author's intention matter?
Below, when we get to threatening speech, we will see that the issue
there is not the author's
intention so much as a reasonable recipient's interpretation.
The problems caused by the majority's rule
would all vanish if we focused our inquiry not on the author's [Smith's]
intent, but on the defendant's [Cremers'] acts
[pld: emphasis added here and in sequel]
So far so good. But then Gould shifts direction radically:
We should hold that the CDA immunizes a
defendant only when the defendant took no
active role in selecting the questionable information for publication.
How does this help Prodigy with family-friendly editing or Stratton-Oakmont non-editing? Why not interpret (3) so the defendant is immunized if the author did intend publication on the internet? Though this interpretation wouldn't have much impact on later §230 cases; it is almost always the case that the author did intend internet publication.
Can you interpret §230 so as to (a) restrict protection to cases when
there was no active role in selection, and (b) solve the Stratton Oakmont problem?
Gould: A person's decision to select particular information for distribution on the Internet changes that information in a subtle but important way: it adds the person's imprimatur to it
No doubt about that part. But Congress said that chat rooms, discussion boards, and listservs do have special needs.
And why then add the "and users" language to the bill? These aren't users.
Gould: If Cremers made a mistake, we should
not hold that he may escape all accountability just because he made that
mistake on the Internet.
Should this liability be there, in light of §230? Does §230 mean that a
company cannot be found liable as publisher or speaker for email created by someone else?
Since this case, there have been MANY others decided by application of this decision. See eff.org's section on Free Speech, http://www.eff.org/issues/free-speech.
There have also been many attacks on §230 immunity. The 2015 SAVE act is one legislative approach. The 2018 FOSTA/SESTA legislation is another, along the same lines. Other limitations may come, someday.
Publisher liability (except when eliminated by §230) exists even without knowledge of defamatory material's inclusion.
Distributor liability is not exempted by §230. It is liability for knowingly distributing defamatory
material. However, in Zeran v AOL (below), the courts found that prior
notice doesn't automatically make for distributor liability.
Currently, the most likely approaches to attacking §230 immunity seem to be to claim distributor liability, or to claim that the hosting site actively contributed to the defamation or actively encouraged defamatory material.
There have been attacks on the §230 defense, but courts have been unwilling to date to allow exceptions, or to restrict coverage to "traditional ISPs" where there is zero role in selection of the other material being republished.
There is still some question though about what happens if you do actively select the material. Cremers played a very limited editorial role. What if you go looking for criticism of someone and simply quote all that? And what if you're a respected blogger and the original sources were just Usenet bigmouths?
EFF: One court has limited §230 immunity to situations in which the originator "furnished it to the provider or user under circumstances in which a reasonable person...would conclude that the information was provided for publication on the Internet...."
Be wary, too, of editing that changes the meaning. Simply deleting some statements that you thought were irrelevant but which the plaintiff thought were mitigating could get you in trouble!
Zeran v AOL: this was a §230 case that extended the immunity to cover at
least some distributor liability. The ruling was by the Fourth Circuit.
Someone posted a fake ad for T-shirts with tasteless slogans related to the Oklahoma City bombing, listing Kenneth Zeran's home number. Zeran had nothing to do with the post (although it is not clear whether the actual poster used Zeran's phone number intentionally). For a while Zeran was getting hostile, threatening phone calls at the rate of 30 per hour.
Zeran lost his initial lawsuit against AOL.
Zeran appealed to the 4th circuit, arguing that §230 leaves intact "distributor" liability for interactive computer service providers who possess notice of defamatory material posted through their services.
Publisher liability: liability even without knowledge of defamatory material's inclusion:
Distributor liability: liability for knowingly distributing defamatory material
Zeran argued that AOL had distributor liability once he notified them of the defamatory material.
Zeran lost, in part because he "fails to understand the practical
implications of notice liability in the
interactive-computer-service context"; note that the court here once again
tried to understand the reality of the internet. The court also apparently
felt that AOL was still acting more as publisher than distributor, at
least as far as §230 was concerned.
What if I quote other defamatory speakers on my blog in order to "prove my point"? Batzel v Cremers doesn't entirely settle this; it's pretty much agreed Cremers did not intend to defame Batzel. The Barrett v Rosenthal case (next) did to an extent address this situation, though.
There's also the distributor-liability issue left only partly settled in Zeran.
Barrett v. Rosenthal, Nov. 20, 2006:
California supreme court affirms core §230 ruling
The case was brought by doctors Stephen Barrett and Tim Polevoy against Ilena Rosenthal, who posted statements on an alternative-medicine newsgroup about the doctors. Barrett and Polevoy operated a website aimed at exposing fraud in alternative medicine. The statements posted by Rosenthal originated with Tim Bolen, an alternative-medicine activist, and included accusations that Dr Polevoy engaged in "stalking" in order to prevent the broadcast of a pro-alternative-medicine TV show. Dr Barrett sued Rosenthal, arguing that Rosenthal bore distributor liability for re-circulating Bolen's remarks. Barrett had warned Rosenthal about the statement, thus meeting the "notice" requirement for distributor liability.
In the case before the California Supreme
Court, the doctor [Barrett] claimed that by warning Rosenthal that Bolen's
article was defamatory, she "knew or had reason to know" that there was
defamatory content in the publication. Under traditional distributor
liability law, Rosenthal should therefore be responsible for
the substance of Bolen's statements, the doctor claimed. The court
rejected the doctor's interpretation, saying that
the statute rejects the
traditional distinction between publishers and distributors, and
shields any provider or user who republishes information online. The court
acknowledged that such "broad immunity for defamatory republications on
the Internet has some troubling consequences," but it concluded that
plaintiffs who allege "they were defamed in an Internet posting may only
seek recovery from the original source of the statement."
Barrett could still sue Bolen. But Bolen might not have had any money,
and Barrett would have to prove that Bolen's original email, as
distributed by Bolen, was defamatory. If Bolen sent it privately,
or with limited circulation, that might be difficult.
See also wikipedia article http://en.wikipedia.org/wiki/Barrett_v._Rosenthal
Rosenthal was arguably even more of an Ordinary User than Ton Cremers.
Rosenthal may very well, however, have chosen Bolen's statement
specifically because it portrayed Dr Barrett negatively. Cremers had no
such motive.
Jane Doe v MySpace: §230 applies to liability re physical harm
Jane Doe, acting on behalf of Julie Doe, her minor daughter, sued MySpace. Julie was 13 when she created a MySpace page, and 14 when she went on a date with someone age 19 who then assaulted her. On the face of it, Doe claims that the suit is about MySpace failing to protect children, or for failing to do SOMETHING. But the court held that it's really about lack of liability for Julie Doe's posting. Note that this isn't libel law at all. The court argued that:
It is quite obvious that the underlying basis of Plaintiff's claims is that, through postings on MySpace, *** and Julie Doe met and exchanged personal information which eventually led to ... the sexual assault.
Therefore the case is in fact about publication, and therefore MySpace is immune under §230.
In Donato [v Moldow], two members of the Emerson Borough Council [New Jersey] sued a Web site operator and numerous individuals after they used pseudonyms when posting on the Web site for "defamation, harassment, and intentional infliction of emotional distress." (74) The appellants argued that Stephen Moldow, the website operator, was liable for the damages because he was the publisher of the website. (75) Much to their chagrin, the trial judge found that Moldow was immune from liability under the Communications Decency Act, (76) and the appellate court agreed. (77) The court reasoned that:
The allegation that the anonymous format encourages defamatory and otherwise objectionable messages 'because users may state their innermost thoughts and vicious statements free from civil recourse by their victims' does not pierce the immunity for two reasons: (1) the allegation is an unfounded conclusory statement, not a statement of fact; and (2) the allegation misstates the law; the anonymous posters are not immune from liability, and procedures are available, upon a proper showing, to ascertain their identities. (78)
Note that Moldow was merely the operator here; he was not doing anything to select content.
Hadley v Doe (Illinois): a commenter using the pseudonym "Fuboy" posted on a newspaper website a comment suggesting that Bill Hadley, a county-board candidate, was a child molester, comparing him to Jerry Sandusky. Hadley sued the paper and its parent corporation for defamation. That suit was settled within a few months, with the paper turning over to Hadley the IP address from which Fuboy's comment was posted.
Hadley then asked Comcast for the corresponding subscriber information. Comcast refused without a court order. Hadley then filed a subpoena for the account information, at which point Comcast informed Fuboy. Fuboy hired an attorney to represent Fuboy anonymously and attempt to quash the subpoena.
The lower court and Illinois appellate court both upheld Hadley's subpoena, overruling Fuboy's argument that the speech was not defamatory because (a) Hadley was a public official, and actual malice was not evident, (b) everyone posts like this on newspaper sites, so defamation cannot be inferred, and (c) the post could have referred to anyone named Sandusky.
Note that Fuboy went to considerable expense to block the release of his identity. He turned out to be Frank Cook, a part-time Stephenson County state's attorney.
The actual libel case was supposed to go to trial in December 2015, but I've heard nothing. My guess is that the case was settled.
Hadley claims to have spent about $35,000 on finding Fuboy's identity.
What do you think of the idea that providing a search function causes a site to lose §230 immunity?
Cook County Sheriff Tom Dart is not running a moral crusade against prostitution per se. He is specifically trying to put an end to child prostitution; that is, the sale of sex with persons under the legal age of consent. Does that affect how we should apply §230? However, the age issue seldom was front-and-center in the Dart v Craigslist struggle; by outward appearances, Dart did sometimes seem simply to want to stop online ads for prostitution.

Craigslist ads are generally free. However, Craigslist began charging in 2008 for adult-services ads, at least in part because payment would make it difficult or impossible for posters to remain anonymous (bitcoin notwithstanding). Overall, it seems Craigslist was not happy with these ads, but did not implement an "individual review" policy until 2011.
What do you think of that potential §230 limitation: that to receive §230 protection you must do at least some content monitoring? If you don't, you can maybe fall back on the Cubby v Compuserve defense that you have only distributor liability. Should the "distributor" classification apply to a site like Craigslist if they did no monitoring?
On the one hand, Model Mayhem argued it is completely covered by §230. On the other hand, they appear to have made no effort to warn prospective models that not all "modeling agencies" are legitimate, and may have in fact known that the perpetrator in Doe's case, Lavont Flanders, had assaulted other MM models.
In 2013 the District court ruled that §230 protects Model Mayhem. In 2014 the Ninth Circuit ruled that §230 did not apply. In 2015 they withdrew that opinion. In May 2016 the Ninth Circuit issued a new opinion, asserting that MM's "failure to warn" claim was separate from its §230 protection for user-posted content. They drew a distinction from Doe v Myspace, in which the assault was tied to Doe's postings. The actual case was remanded back to the District court for trial, partly on the issue of whether MM actually did have a duty to warn.
As of 2018, the District Court has apparently not yet ruled.

If this ruling stands, it would mean Uber might not be able to claim §230 immunity for assaults by its drivers. Uber's business model is, superficially, to allow riders and drivers to contact one another (by posting on a site, in effect) to negotiate rides; taken literally, §230 might apply. Uber is, however, much more in charge of riders and, in particular, drivers than it sometimes admits.
The catch? The sued party named "Bob" has no relationship to the real Bob. I am not making this up. In several instances of this kind of "fake lawsuit", reputation-management companies have been implicated.
Suppose Bob posted his opinion of a restaurant, or an honest criticism of service he received somewhere. What speech protections is Bob entitled to?
Conversely, of course, suppose Alice wins a legitimate defamation lawsuit against Bob. Under §230 today, Charlie.com would have no obligation to remove the content. Is that fair to Alice?
In many cases, the target served with the takedown notice is not Charlie.com, but Google.com. Should Alice be able to have Bob's comments entirely de-indexed?
Former Sony chairman Michael Lynton used a possibly legitimate process to have an unflattering Gawker article by Sam Biddle removed from the Gawker archives (though not from Google search itself). The article, based on emails released in the 2014 Sony hack, alleged that Lynton donated a significant sum of money to Brown University in the hopes of improving his daughter's chances of admission.
See washingtonpost.com/news/volokh-conspiracy/wp/2017/05/11/how-a-former-sony-chairman-de-indexed-an-article-based-on-his-sony-hack-e-mails for further information about what actually happened.
The Gawker link is here: gawker.com/how-the-rich-get-into-ivies-behind-the-scenes-of-elite-1699066450. But all it says is "Page Not Found".
If one tries to search for the article in the Wayback Machine, one gets this: web.archive.org/web/*/http://gawker.com/how-the-rich-get-into-ivies-behind-the-scenes-of-elite-1699066450. It says "Page cannot be displayed due to robots.txt".
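The "robots.txt" message deserves a word of explanation. At the time, the Wayback Machine honored a site's robots.txt file retroactively: if gawker.com's robots.txt excluded the Internet Archive's crawler (whose user-agent is ia_archiver), even previously archived copies would stop being displayed. A minimal exclusion, as a sketch (this is illustrative; Gawker's actual robots.txt is not reproduced here), looks like:

```text
# robots.txt at the site root, e.g. http://gawker.com/robots.txt
# Blocking only the Internet Archive's crawler:
User-agent: ia_archiver
Disallow: /
```

So a deleted article can vanish from the archive as well, without any takedown request to the Archive itself. (The Internet Archive announced in 2017 that it was moving away from strict robots.txt compliance.)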
Another article about the event, from the prestigious Chronicle of Higher Education, is here: chronicle.com/blogs/ticker/brown-gave-special-admissions-treatment-to-donors-daughter-hacked-emails-show/97599. This article includes a link to the now-deleted Gawker article. A Buzzfeed article is here: buzzfeed.com/stevenperlberg/a-sony-hack-story-has-been-quietly-deleted-from-gawker.
The online-brokerage argument fits Airbnb even better. Real estate owners, in theory, use Airbnb to post ads for their properties, much like they might on Craigslist. What responsibility should Airbnb have? Uber, at least, sets fares and assigns a driver to a passenger.
San Francisco, in an effort to fight Airbnb listings (which reduce the already limited housing stock), passed a law making it a criminal offense to provide booking services for unlicensed rental units, which most Airbnb listings are. Airbnb objected, based on §230, but San Francisco had written the law so that it applied at the point the booking service actually booked the unit and took its commission. The city argued this had nothing to do with listings and was a legitimate attempt to regulate commerce; at the district court level, Airbnb lost.
So Airbnb settled: they worked out an expedited short-term-rental-licensing process, and they now assist unit owners wanting to list with Airbnb in getting licensed.
Airbnb also, in December 2017, dropped a lawsuit against New York City, which had passed a law levying heavy fines on those listing unlicensed units with Airbnb.
Airbnb did have a strong §230 defense in both cases, particularly in the SF case. SF was holding Airbnb criminally liable for the actions of the unit owners. Nonetheless, Airbnb is now well established, and can afford to comply with regulations. These regulations do help keep out competitors, for example. The settlements also allow Airbnb to pursue becoming a public company (that is, its IPO), without the cloud of pending litigation.
The plaintiffs were survivors of attacks by Hamas; they sued Facebook on the legal theory that Facebook "supported terrorist organizations by allowing those groups and their members to use its social media platform to further their aims." In particular:
Facebook allows [Hamas], its members, and affiliated organizations to operate Facebook accounts in their own names, despite knowledge that many of them have been officially named as terrorists and sanctioned by various governments.
On May 18, 2017 the judge dismissed the case on §230 grounds: Facebook is not responsible for the content posted by terrorists.
Had the plaintiffs won, it is hard to see how any messaging or email service would emerge unscathed: a side effect of allowing communications is that sometimes people communicate evil things.
Note that we are quite a ways away from defamation here. Note also that, as there were 20,000 plaintiffs, Facebook's potential liability might have been huge.
More at blog.ericgoldman.org/archives/2017/05/facebook-defeats-lawsuit-over-material-support-for-terrorists-cohen-v-facebook.htm.
Backpage.com is a competitor to Craigslist, in that it runs free classified ads for a wide variety of sales and services. It was founded by Village Voice Media, which ran a group of alternative newspapers with a strong tradition of protecting the rights of the downtrodden.
The problem Backpage (and also Craigslist) faced is that, in retrospect, there seems to be an ineluctable progression:
Backpage started out with a section for "matching" ads; that is, dating ads; the predecessor alternative papers had had a decades-long tradition of this.
These online dating ads eventually morphed into ads for "professional" dating; that is, prostitution. Many of the people (mostly women) running these ads were running their businesses as sole proprietors; that is, not under the direction of a so-called "pimp". Eventually, ads for prostitution displaced ads for "genuine" dating. This displacement may be specific to Backpage, and may result from Backpage's making negligible effort to delete prostitution ads. Match.com seems to have a few prostitution ads, but overall it seems to be less of a problem there.
And then, among the prostitution ads, started appearing ads for minor girls, as purveyors of underage prostitution took advantage of Backpage as a place to advertise. Ads for girls as young as thirteen would appear. The girls were usually held in de facto captivity. Some had been lured from home and some were runaways, but they generally had nowhere else to go. There seems to be no solid data on the percentage of prostitution ads that featured underage girls.
Underage prostitution is often described with the term "sex trafficking", although there are also some adults who fall victim to forced prostitution. Nobody is quite sure how many people are in the latter category, though it is probably not a large number. For that matter, solid data for underage prostitution itself is often not available.
Backpage made quite a bit of money from running these ads. While they claimed to be appalled by child prostitution, they did not take all the steps that Craigslist took to reduce prostitution generally. They did implement some steps to limit the ads; for example, they hired moderators to review ads, and pull the ones that were overtly ads for children. However, terms like "Lolita", "young", "fresh", "new in town" or "off the boat" were often allowed; these and others were widely acknowledged codewords for underaged girls.
Backpage has been widely criticized for being the prime marketplace for advertisements for underage prostitution. It is not clear what fraction of Backpage's revenue, however, came from this. Backpage's annual net revenue eventually climbed to well over $100 million per year.
For a while, Backpage cooperated with the National Center for Missing and Exploited Children. Eventually, though, NCMEC decided Backpage was not doing enough, and began working (with others) to shut Backpage down.
The Wikipedia page wikipedia.org/wiki/Backpage lists numerous lawsuits against Backpage, most of which were dismissed on §230 grounds.
While Backpage certainly played a role in "facilitating" underage prostitution, in the sense that those profiting from this activity used Backpage to run ads, "facilitating" doesn't by itself suggest much direct responsibility. Cell-phone providers "facilitate" prostitution (and illegal drug sales), for example, by making it easier to arrange meetings with clients, but nobody argues that cellular networks should be shut down as a result.
Another way to look at the question is to ask whether underage victims would be better off if Backpage shut down, or else were successful in blocking all ads for underage prostitution. Did Backpage create the marketplace, or was it simply a generic marketplace that was widely adopted by traffickers?
If Backpage disappears from this market (as seems likely, below), other websites are likely to take its place. Some of these are likely to be offshore, in tolerant jurisdictions. Others are likely to be "onion sites", at undetectable locations and reachable only via the Tor browser. In this sense, shutting down Backpage will do little to stop the problem. Newer websites, overtly illegal, are certain to dispense with the limited moderation Backpage offered.
Most of the victims described in the film I Am Jane Doe were actually rescued because police or family found their pictures on backpage.com. This has been used to suggest that online advertising for underage prostitution is not entirely a bad thing here; it allows the police an avenue to investigate.
Still, Backpage did a weak job of moderation. Some evidence suggests their moderation became weaker as time went on. More ads meant more money.
The name stands for Stop Advertising Victims of Exploitation; it is not related to the Campus SaVE act. It is part of the Justice for Victims of Trafficking act, signed into law in 2015, and represents the first legal step away from blanket §230 immunity. It creates criminal liability for websites that knowingly run prostitution advertisements for victims of "sex trafficking". This term is sometimes used to refer to adults coerced into sex work, but seems to be used more often as a synonym for child prostitution (which is in fact usually coerced). The law's principal goal appears to be ending underage-prostitution ads on Backpage.com, which continues to run ads for "escorts". (In January 2017 Backpage temporarily removed all content from its "adult" sections, but it seems to have come back.)
Prevention of underage prostitution is unarguably an important goal. Some people (mostly at the right edge of the political spectrum), however, regard all prostitution as inherently "forced"; this seems a bit of an overstatement. In 2016 Amnesty International adopted a policy advocating for decriminalization of prostitution. They took this position to make it easier to fight underage prostitution. In any event, anti-sex-trafficking laws are seen by some as blocking child prostitution, and by others as cracking down on adult prostitution.
It does appear that the vast majority of adult sex workers have chosen their occupation voluntarily.
The San Francisco site myredbook.com was shut down in 2014, apparently for running ads specifically for prostitution, with no generic "dating" cover. The article http://www.wired.com/2015/02/redbook/ has some anecdotes on this, but little data. Another article is http://www.rawstory.com/2014/07/fbi-seizure-of-my-red-book-website-spurs-san-francisco-bid-to-decriminalize-prostitution/, suggesting there is increasing sentiment to decriminalize prostitution in San Francisco.
The central §230 difficulty with the SAVE act is that it appears to make it impossible for a site to run ads for sex work, or even for dating, given the following:
Age verification is certainly one approach, but most persons involved in sex work are loath to turn over identification to an organization that may have its records seized by the police.
As of Spring 2018, Congress is considering the FOSTA-SESTA Act (FOSTA in the House and SESTA in the Senate). This is an even stronger step away from §230, though again applying only to online listings related to prostitution. It establishes civil liability for trafficking victims -- that is, a right to sue the website in question. (It passed the House and Senate on March 23.)
The FOSTA-SESTA Act is, like the SAVE act, nominally aimed at "sex trafficking", which is often equated with child prostitution. Unfortunately, the FOSTA-SESTA Act does not make clear distinctions between "sex trafficking" and conventional prostitution, and seems likely to ban all online advertising for sex work.
While some of the goals of the FOSTA-SESTA Act are socially important, here are some things to think about:
Following Congressional passage of FOSTA-SESTA, Craigslist decided to stop running "dating" ads, though in the past few years this category seems to have been largely overrun by prostitution. In the post-FOSTA-SESTA climate, it is hard to see how a website running dating ads could escape liability, though at Match.com the problems (and there are some) seem to have been manageable. Match.com charges users a significant fee; this can support much more verification than free sites. Perhaps user reporting of suspicious ads will be effective. Perhaps, however, online dating is doomed.
Rebecca Sedwick, Sept 9, 2013: Florida; Ask.fm and others; two other students arrested on felony charges
Jessica Laney, Dec 11, 2012: Florida; Ask.fm
Ciara Pugsley, Sept 29, 2012: Ireland; Ask.fm
Megan Meier, Oct 17, 2006: Lori Drew created fake account
Ryan Halligan, Oct 7, 2003: Vermont
1. Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network, Member States shall ensure that the service provider is not liable for the information transmitted, on condition that the provider:
(a) does not initiate the transmission;
(b) does not select the receiver of the transmission; and
(c) does not select or modify the information contained in the transmission.
See Baase 4e p 148 / 5e p 154
1996: AOL v Cyber Promotions
Note that CP initially sued AOL for blocking CP's spam! Eventually AOL sued CP.
Intel-Hamidi case: Ken Hamidi sent email to 30,000 Intel employees. Intel sued. The case eventually reached the California Supreme Court, which ruled in Hamidi's favor. Hamidi's emails were about how Intel employees could better understand their employment rights.
Harris Interactive sued the Mail Abuse Prevention System, for blocking their opinion-poll email. One interesting claim by Harris is that they were "turned in" to MAPS by a competitor. Harris dropped the suit.
People have a right to send email. Sort of.
(University of Phoenix): placeholder site, but see here
domain lookup error
Ah, but there are anti-GM sites! Well, were.
placeholder; I gave up on my 1990 Chrysler in 2015
these folks are really ticked off!
everything is user-contributed. 2018: leads to coin.space
Completely gone, but was a serious site on linux improvement
In the late 1980s, Dave Morris and Helen Steel, along with others in an
organization known as London Greenpeace (unaffiliated with Greenpeace
International), handed out leaflets (remember those?) at local McDonald's
stores. The leaflets made claims such as the following:
Note that their story had NOTHING to do with the internet! Though,
today, the group most likely would have a website.
McDonalds had done a great deal of investigating; they had hired spies to infiltrate London Greenpeace to get names of members involved. This wasn't entirely coordinated; two spies spied on each other for an extended period. Another spy had a long romantic relationship with a real member.
In 1990, McDonalds sued everyone in the group for libel. Everyone folded except for Morris and Steel.
The case went on for two and a half years, the longest civil case in English history. Morris & Steel raised £35,000 for their defense, most of which apparently went to paying for transcripts.
Technically, Morris and Steel lost. Recall that in England the defense in a libel trial has to prove their claims are true; in the US the plaintiff must prove the claims false. From http://mcspotlight.org/case/trial/story.html:
Mr Justice Bell took two hours to read his summary to a packed court room. He ruled that Helen and Dave had not proved the allegations against McDonald's on rainforest destruction, heart disease and cancer, food poisoning, starvation in the Third World and bad working conditions. But they had proved that McDonald's "exploit children" with their advertising, falsely advertise their food as nutritious, risk the health of their most regular, long-term customers, are "culpably responsible" for cruelty to animals, are "strongly antipathetic" to unions and pay their workers low wages.
And so, Morris & Steel were held liable for £60,000 in damages.
As a practical matter, though, McDonalds was exposed throughout the case to over five years of increasingly bad -- make that dreadful -- press. Most ordinary people were offended that a huge corporation would try so hard to squelch criticism; the business community was none too supportive of McDonalds either. McDonalds' bungling spies didn't help any.
On appeal, the court agreed with Morris and Steel that McDonalds' food might be considered "unhealthy", but Morris and Steel had also claimed it was "carcinogenic". The judgment was reduced to £40,000.
On 15th February 2005, the European Court of Human Rights in Strasbourg declared that the mammoth McLibel case was in breach of the right to a fair trial (because Morris and Steel were not provided with an attorney) and right to freedom of expression.
The bottom line for McDonalds, and for corporations generally, is that taking a critic to court for libel can be a very risky strategy. (There are still many individual libel lawsuits, but these are expensive.)
The phrase Libel Terrorism is a play on "libel tourism", the practice of suing for libel in the UK (or another friendly venue, though it's hard to beat the UK's "defendant must prove truth" doctrine, plus the "plaintiff need not prove malice" part).
New York now has the Libel Terrorism Protection Act.
Case: Sheikh Khalid bin Mahfouz v Rachel Ehrenfeld
Rachel Ehrenfeld wrote Funding Evil,
a rather polemical book about how terrorist organizations gain funding
through drug trafficking and other illegal strategies. The first edition
appeared in 2003. The book apparently alleges that Sheik Khalid bin
Mahfouz is a major participant in terrorist fundraising. Mahfouz
sued in England, although the book was not distributed there; however, 23
copies were ordered online from the US. In 2005 the court in England found
in Mahfouz's favor, describing Ehrenfeld's defense as "material of a
flimsy and unreliable nature" (though some of that may have been related
to the costs of mounting a more credible defense, and Ehrenfeld's
conviction that no such defense should be necessary), and ordered
Ehrenfeld to pay $225,000.
Ehrenfeld filed a lawsuit against Mahfouz in the US, seeking a
declaration that the judgment in England could not be enforced here. The
case was dismissed because the judge determined that the court lacked
jurisdiction over Mahfouz. A second ruling arriving at the same conclusion
came in 2007.
In May 2008, New York state passed the Libel Terrorism Protection Act, which
offers some protection against enforcement in New York state of
libel judgments from other countries. However, Mahfouz has not sought to
collect, and probably will not.
(compare wto.org and wipo.int)
This is vaguely related to McLibel-type sites, in that this is an attack
on the "real" WTO (the successor to the General Agreement on Tariffs and
Trade, or GATT). Is this funny? Or serious? Are there legitimate
Note that the site keeps changing. Try to find the links that are actually there.
With libel, the Ninth Circuit in the Batzel-v-Cremers case interpreted §230 as saying you have immunity for posting material originated from someone else, if your understanding was that the other party intended the material for posting.
With "threat speech", the courts have long held that speech qualifies as a threat if a reasonable listener (or reader) feels that a threat is intended. Your intentions may not count at all.
In the case Planned Parenthood v American Coalition of Life Activists (ACLA, not to be confused with the ACLU, the American Civil Liberties Union), Planned Parenthood sued ACLA over a combination of "wanted" posters and a website that could be seen as threatening abortion providers. In early 1993 a "wanted" poster for Dr David Gunn, of Florida, was released; on March 10, 1993 Dr Gunn was murdered. Also in 1993, a wanted poster for Dr George Patterson was released, and on Aug 21, 1993 Dr Patterson was murdered, although there was never a claim that he was murdered for providing abortion services, and there was some evidence that his murder was part of a random altercation. Two days before, Dr George Tiller was shot and wounded in Kansas; Tiller was murdered in a later attack on May 31, 2009. In 1994 a poster for Dr John Britton, of Florida, was released; Dr Britton was murdered, along with James Barrett, on July 29, 1994. Dr Hugh Short was shot November 10, 1995; he survived, but could no longer perform surgery.
There was never any evidence that the ACLA itself participated in any of
the assaults, or had any direct contact with those who did; there were
plenty of individual antiabortion extremists who were apparently willing
to carry these out on their own.
I've never been able to track down any of these individual posters (which
is odd in and of itself), but here's a group one:
When US Rep Gabrielle Giffords (D-AZ) was shot in January 2011, some
people pointed to the poster below from Sarah Palin's site, and to her
Twitter line, "Don't Retreat, Instead - RELOAD!" A June 2010 post from
Giffords' election opponent Jesse Kelly said, "Get on Target for Victory
in November. Help remove Gabrielle Giffords from office. Shoot a fully
automatic M16 with Jesse Kelly."
But there are multiple differences. Perhaps the most important is that no
new crosshair/target/wanted-style posters have been released by anyone
since the Tucson shootings. Under what circumstances might people view
this kind of poster as a threat? Should candidates and political-action
committees be required to address perceived threats?
Neal Horsley was an anti-abortion activist pretty much unaffiliated with ACLA. He maintained a website he called the "Nuremberg Files" site, with the nominal idea of listing names of abortion providers for the day when they might be tried for "crimes against humanity" (in genuine crimes-against-humanity cases, the defense "it was legal at the time" is not accepted).
On Oct 23, 1998, Dr Barnett Slepian was killed at home. The day before, according to Horsley, his only intent was to maintain a list of providers; the day after, he added Dr Slepian's name with a strikethrough. Strikethroughs were also added to the names of Drs Gunn, Patterson and Britton. Dr Slepian's name had not been on Horsley's list at all before his murder, leading Horsley to protest vehemently that his site could not have been a threat against Slepian. The lawsuit, however, was filed by other physicians who felt it was a threat to them; Horsley is silent on this point.
At the conclusion of the trial, the judge ordered (among other things) that Horsley not use the strikethrough any more.
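For concreteness, the "strikethrough" at issue is just a typestyle; a web page can produce it several ways. Horsley's actual markup is not preserved here, so the following HTML is purely illustrative:

```html
<!-- Illustrative only: three ways a page can render a name struck through.
     This is not Horsley's actual markup. -->
<s>David Gunn</s>                  <!-- the strikethrough element -->
<del>David Gunn</del>              <!-- "deleted text"; browsers render it struck through -->
<span style="text-decoration: line-through">David Gunn</span>  <!-- CSS styling -->
```

The point is that the injunction targeted the *meaning* readers attached to the struck-through names, not any particular piece of markup.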
Why would a judge issue rules on what typestyle (eg strikethrough)
a website could use? Did the judge in fact issue that ruling, or is that
just an exaggeration from the defendants? The actual injunction (from the
District Court link, below) states
That is much more general than just "no strikethrough", though the strikethrough was widely interpreted as a "specific intent to threaten". But intent is notoriously hard to judge, and in fact (as we shall see) the case ended up hinging more on the idea that Horsley's site would be interpreted as a threat by a neutral observer.
If you create a website, who should interpret your intent?
Here is an archive of Horsley's site with the original strikethrough: aborts2.html (Dr Gunn is column 2, row 8).
After the Ninth Circuit's ruling, Horsley replaced his list of names with a pro-abortion site's list of providers who had been injured or murdered; an abbreviated archive is at aborts.html. Lower down on his original version of this page, Horsley used strikethrough for names of women who had died as a result of receiving an abortion.
Horsley also created a separate page discussing the Ninth Circuit's
ruling and a related California provider-privacy law, in an effort to
explain his position. He reproduced part of his original list with
strikethroughs. A portion of this page is archived at californicate.htm.
After looking at these, consider Horsley's claim,
Do you think this is an accurate statement?
The civil case was filed in 1995, after several abortion providers had been murdered and others (eg Dr Hugh Short) attacked, and "wanted" posters had been issued by ACLA for others. There was a federal law, the 1994 Federal Freedom of Access to Clinic Entrances Act (FACE), that provided protections against threats to abortion providers.
Horsley's site was created in 1997, and was added to the case; apparently Horsley himself was not added. By 1997, the internet was no longer new, but judges were still having difficulty figuring out what standards should apply. In retrospect it seems reasonable to think that, if it were not for the context created by the "Wanted" posters, there would have been no issue with the Nuremberg Files web pages.
Horsley's actual statements are pretty much limited to facts and to opinions that are arguably protected. He does not appear to make any explicit calls to violence.
Planned Parenthood, on the other hand, claimed the site "celebrate[s] violence against abortion providers".
For a while, Horsley was having trouble finding ISPs willing to host his
site. The notion of ISP censorship is an interesting one in its own right.
The Stanford site, below, claims that OneNet, as the ISP (carrying traffic
only) for the webhosting site used by Horsley, demanded that Horsley's
content be removed.
Here's a Stanford student group's site about the case.
The central question in the case is whether the statements amounted to a "true threat" that met the standard for being beyond the bounds of free-speech protection.
The District Court judge (1999) gave the jury instructions to take into account the prevailing climate of violence against abortion providers; the jury was also considering not an ordinary civil claim but one brought under the Freedom of Access to Clinic Entrances act (FACE), which allows lawsuits against anyone who "intimidates" anyone providing an abortion. (The first-amendment issue applies just as much with the FACE law as without.) The jury returned a verdict against the ACLA for $100 million, and the judge granted a permanent injunction against the Nuremberg Files site (Horsley's).
The District Court judge wrote (full order at PPvACLA_trial.html):
Under current free-speech standards (for criminal law at least), you ARE allowed to threaten people. You ARE allowed to incite others to violence.
However, you are NOT allowed to incite anyone to imminent violence, and you are NOT allowed to make threats that you personally intend to carry out. Did the ACLA do either of these things? Does it matter that this was a civil, not criminal, case?
The case was appealed to a 9th Circuit three-judge panel, which overturned
the injunction. Judge Kozinski wrote the decision, based on NAACP
v Claiborne Hardware, SCOTUS 1982.
NAACP v Claiborne Hardware summary: This was, like PP v ACLA, a civil case. The NAACP had organized a boycott in 1968 of several white-owned businesses, and had posted activists to take down names of black patrons; these names were then published and read at NAACP meetings. The NAACP liaison, Charles Evers [brother of Medgar Evers] had stated publicly that those ignoring the boycott would be "disciplined" and at one point said "If we catch any of you going in any of them racist stores, we're gonna break your damn neck."
A local merchant association sued the NAACP for lost business. The Supreme Court found in the NAACP's favor, on the grounds that the boycott itself was protected speech under the First Amendment. Also, there was no evidence Evers had authorized any acts of violence, or even made any direct threats (eg to specific individuals); Evers' "speeches did not incite violence or specifically authorize the use of violence".
Judge Kozinski argued that whatever the ACLA was doing was less
threatening than what Evers was doing, and therefore dismissed the case.
However, another feature of the Claiborne case was that, while there were several incidents of minor violence directed at those who were named as violators of the boycott, in fact nobody was seriously harmed. And, of course, the ACLA's "wanted" posters were indeed directed against specific individuals.
Furthermore, the merchants who brought the Claiborne case had experienced essentially no violence whatsoever; the Supreme Court found that the nonviolent elements of the boycott were protected speech. The plaintiff merchants had no standing to address the allegations of violence. Seen that way, the Claiborne case offers no precedent relevant to the ACLA case. Claiborne is not really about the right to make vague threats.
The full Ninth Circuit then heard the case, en banc.
The ruling was by Judge Rymer, with dissents by judges Reinhardt, Kozinski (writer of the decision of the three-judge panel that heard the case), and Berzon (of Batzel v Cremers)
5 pages of plaintiffs / defendants
Here is Rymer's problem with the NAACP v Claiborne analogy, at 7121/41:
Even if the Gunn poster, which was the first "WANTED" poster, was a purely political message when originally issued, and even if the Britton poster were too, by the time of the Crist poster, the poster format itself had acquired currency as a death threat for abortion providers. Gunn was killed after his poster was released; Britton was killed after his poster was released; and Patterson was killed after his poster was released.
Neal Horsley claims no one was listed on the Nuremberg Files list until after they were attacked.
But more importantly, does the temporal sequence above of "first a poster, then a crime" constitute a "true threat"?
Here's Rymer's summary: 7092/12, 3rd paragraph
We reheard the case en banc because these issues are obviously important. We now conclude that it was proper for the district court to adopt our long-standing law on "true threats" to define a "threat" for purposes of FACE. FACE itself requires that the threat of force be made with the intent to intimidate. Thus, the jury must have found that ACLA made statements to intimidate the physicians, reasonably foreseeing that physicians would interpret the statements as a serious expression of ACLA's intent to harm them because they provided reproductive health services. ...
7093/13 We are independently satisfied that to this limited extent, ACLA's conduct amounted to a true threat and is not protected speech
Threats are not the same as libel: 7099/19
Section II: (p 7098/18) discussion of why the court will review the facts (appeals courts sometimes don't) as to whether ACLA's conduct was a "true threat"
Section III (p 7105) ACLA claims its actions were "political speech" and not an incitement to imminent lawless action. Posters have no explicitly threatening language!
7106/26, end of 1st paragraph:
This is a core problem: can context be taken into account? Can possible actions of others be taken into account?
The text of the FACE law:
Whoever... by force or threat of force or by physical obstruction, intentionally injures, intimidates or interferes with or attempts to injure, intimidate or interfere with any person because that person is or has been [a provider of reproductive health services] ... [n]othing in this section shall be construed . . . to prohibit any expressive conduct ... protected from legal prohibition by the First Amendment
This subjects them to civil remedies, though perhaps not prior restraint.
The decision cited the following Supreme Court cases:
Brandenburg v Ohio, criminal case, SCOTUS 1969: The First Amendment protects speech advocating violence, so long as the speech is not intended to produce "imminent lawless action" (a key phrase introduced) and is not likely to produce such action.
This was an important case that strengthened and clarified the "clear and present danger" rule (speech can only be restricted in such situations) first spelled out in Schenck v US, 1919. Brandenburg introduced the "imminent lawless action" standard.
Clarence Brandenburg was a KKK leader who invited the press to his rally, at which he made a speech referring to the possibility of "revengeance" [sic] against certain groups. No specific attacks OR TARGETS were mentioned.
Robert Watts v United States, criminal case, SCOTUS 1969. Watts spoke at an anti-draft rally at a DuBois Club meeting:
"They always holler at us to get an education. And now I have already received my draft classification as 1-A and I have got to report for my physical this Monday coming. I am not going. If they ever make me carry a rifle the first man I want to get in my sights is L.B.J."
Watts' speech was held to be political hyperbole. This case overturned long precedent regarding threats.
Particular attention was given to NAACP v Claiborne, considered above. The crucial distinction: there was no actual violence then! The Supreme Court's decision was in effect that Evers' speeches did not incite illegal activity, and thus he could not be found liable for any business losses. No "true threat" determination was made nor needed to be made.
Also, Evers' overall tone was to call for non-violent actions such as social ostracism.

Here is another important case cited as a precedent, also decided by the Ninth Circuit, in which a statement was ruled a genuine threat:
Albert Roy v United States,
criminal case, Ninth Circuit, 1969:
USMC private Roy heard that then-President Nixon was coming to the base,
and said to a telephone operator "I hear the President is coming to the
base. I am going to get him". Roy's conviction was upheld,
despite his insistence that his statement had been a joke, and that he had
promptly retracted it. This case was part of a move to a "reasonable
person" test, eventually spelled out explicitly by the Ninth Circuit in
its case United States v
Whether a particular statement may properly be considered to be a threat is governed by an objective standard -- whether a reasonable person would foresee that the statement would be interpreted by those to whom the maker communicates the statement as a serious expression of intent to harm or assault.
Note this "reasonable person" standard. On the one hand, this means no hiding behind "that's not really what we meant"; on the other hand, what if violence is not what you really meant? In Roy's case, part of the issue was that the threat was reasonable enough to frighten the telephone operator, and thus to affect the security preparations for the upcoming visit.
(All three of these last cases were criminal cases. In the 2015 Elonis case, below, the Supreme Court ruled that the "reasonable person" standard was not normally sufficient for criminal threat prosecution; this standard was for civil cases only. The PP v ACLA case was, of course, a civil case.)
It is not necessary that the defendant intend to, or be able to carry out his threat; the only intent requirement for a true threat is that the defendant intentionally or knowingly communicate the threat.
The defendant must communicate it as a serious threat, that is, not just hyperbole.
In an amicus brief, the ACLU argued the person must have intended to threaten or intimidate.
Rymer: this intent test is already included in the language of FACE, and the jury found that ACLA met it. Did ACLA intend to "intimidate"? Or were the "wanted" posters mere hyperbole?
Two dissents in the decision argue that the speaker must "actually intend to carry out the threat, or be in control of those who will"
But Rymer argues that the court should stick with the "listener's reaction"; ie the reasonable-person standard again.
Here's the conclusion of Rymer's line of argument on intent v how it is heard:
7116/36: Therefore, we hold that "threat of force" in FACE means what our settled threats law says a true threat is: a statement which, in the entire context and under all the circumstances, a reasonable person would foresee would be interpreted by those to whom the statement is communicated as a serious expression of intent to inflict bodily harm upon that person. So defined, a threatening statement that violates FACE is unprotected under the First Amendment.
Crucial issue: the use of the strikeout and grey-out. This is what crosses the line.
7138/53, 2nd paragraph:
The Supreme Court refused to hear the case. The Ninth Circuit had
established that the speech in question met certain standards for being a
true threat, and the ACLA would have had to argue that some factual
interpretations were mistaken. But the Supreme Court does not generally
decide cases about facts; they accept cases about significant or
conflicting legal principles.
See also Baase, 4e p 173, Exercise 3.22 below (omitted from 5e); note that
the "controversial appeals court decision" refers to the three-judge
panel, reversed by the en banc decision.
Finally, you might wonder why, with all the threats of violence made during the course of the civil rights movement by whites against blacks, the case NAACP v Claiborne that comes to us is an allegation of violence by blacks against blacks, filed by whites. I think it's safe to say that the answer has nothing to do with who made more threats, and everything to do with who could afford more lawyers.
Hit Man was a book published by Paladin Press, written by "Rex Feral". There is a story circulating that the author is a woman who writes true-crime books for a living, but this seems speculative. It was likely not written by an actual hit man.
In 1993, James Perry murdered Mildred Horn, her 8-year-old son Trevor, and nurse Janice Saunders. He was allegedly hired by Lawrence Horn. In Rice v Paladin Enterprises (1997), the federal court of appeals (4th circuit) held that the case could go to jury trial; ie freedom-of-press issues did not automatically prevent that.
Many of the specifics of the Perry murders came straight out of the book. Many of them are rather compellingly "obvious": pay cash, rent a car under an assumed name, steal an out-of-state license plate, use an AR-7 rifle (accurate but collapsible), make it look like a robbery.
The book also explains how to build a silencer, which is not at all
obvious; Perry allegedly did just this.
The following are from the judge's decision. "Stipulations" are facts that, for purposes of the present proceeding, the parties have agreed not to contest.
"The parties agree that the sole issue to be decided by the Court . . . is whether the First Amendment is a complete defense, as a matter of law, to the civil action set forth in the plaintiffs' Complaint. All other issues of law and fact are specifically reserved for subsequent proceedings." (emphasis added)
Notwithstanding Paladin's extraordinary stipulations that it not only knew that its instructions might be used by murderers, but that it actually intended to provide assistance to murderers and would-be murderers which would be used by them "upon receipt," and that it in fact assisted Perry in particular in the commission of the murders of Mildred and Trevor Horn and Janice Saunders, the district court granted Paladin's motion for summary judgment and dismissed plaintiffs' claims that Paladin aided and abetted Perry, holding that these claims were barred by the First Amendment as a matter of law.
What's going on here? Why did Paladin stipulate all that? It looks to me like Paladin was acknowledging the hypotheticals as part of its claim that they didn't matter, that the First Amendment protected them.
The court ruled it did not:
Past cases that lost:
Brandenburg v Ohio [discussed above under PP v ACLA] was cited as a case of protected speech advocating lawlessness. But this case, due to Paladin's stipulations [!!], involved much more specific assistance.
A popular theory was that after Paladin Press settled the case (which they did, under pressure from their insurer), the rights to the book ended up in the public domain. Paladin claims otherwise, and this theory makes no sense under copyright law. However, the Utopian Anarchist Party promptly posted the entire book at overthrow.com, and Paladin, no longer able to profit from the book, was completely uninterested in takedown efforts. (The bootleg copies don't have the diagrams, though.)
It has been claimed that Hit Man
was sold almost entirely to non-criminals who simply like
antiestablishment stuff. However, this is (a) speculative (though likely),
and (b) irrelevant to the question of whether some
criminals bought it.
Look at the current Paladin
website. Does it look like their primary focus is encouraging
criminals? Secondary focus?
To find Hit Man, google "hit man" "rex feral", or search Amazon.com. Most references as of 2009 were to sellers of used copies of the physical book; by 2016 Google listed more online copies of the book and articles about its history with Paladin. Check Amazon.com for current prices of used editions. The site http://mirror.die.net/hitman still has the online text.
Other bad materials:
Note the Encyclopedia of Jihad has a significant political/religious component!
4th-circuit opinion: http://www.bc.edu/bc_org/avp/cas/comm/free_speech/rice.html
See also Marc Greenberg's article at http://www.btlj.org/data/articles/18_04_05.pdf.
(Quotes below not otherwise cited are from Greenberg's article.)
Yahoo offered Nazi memorabilia for sale on its auction site. They were
sued by LICRA (originally the LIgue
Contre le Racisme et l'Antisémitisme; later the Ligue Internationale
Contre le Racisme et l'Antisémitisme), joined by the UEJF, the Union of
French Jewish Students. In France the sale of Nazi memorabilia is illegal.
This was a civil case; no criminal charges against Yahoo executives were ever filed, and no Yahoo execs were arrested while changing planes in France.
This is a JURISDICTIONAL case that probably should
be discussed elsewhere, except that it addresses a free-speech issue. But
this is as good a time as any to start in on some of the rationales for a
given court's claiming judicial jurisdiction related to an action that
occurred elsewhere. Here are some theories, more or less in increasing
order of "engagement":
The LICRA v Yahoo case was heard in Paris by Judge Jean-Jacques Gomez,
who explained the French law as follows:
Whereas the exhibition of Nazi objects for
purposes of sale constitutes a violation of French law ..., and even more
an affront to the collective memory of a country profoundly traumatised by
the atrocities committed by and in the name of the criminal Nazi regime
against its citizens and above all against its citizens of the Jewish
faith . . . .
Judge Gomez decided they did have jurisdiction to hear the case. But
Yahoo US has no assets in France! There was a separate company, Yahoo
France, that controlled the yahoo.fr domain.
Judge Gomez based his jurisdictional decision on the so-called effects test: that the actions of Yahoo US had negative effects within France. Intent, or targeting, or direction do not enter; the effects test is perhaps the weakest basis for claiming jurisdiction. Gomez later explained some of his reasoning in an interview:
For me, the issue was never whether this was an American site, whether Yahoo had a subsidiary in France, the only issue was whether the image was accessible in France. It is true that the Internet creates virtual images, but to the extent that the images are available in France, a French judge has jurisdiction for harm caused in France or violations of French law.
Gomez issued his first interim order on May 22, 2000: that Yahoo US must use geolocation software to block access to its auction materials within France. It was estimated that 70% of French citizens could be blocked by the software alone, and that another 20% would be blocked by adding a page that said
To continue, click here to certify that you are not in France.
What would the purpose of that be? Clearly, French neo-Nazis would likely
simply lie. However, other French citizens would be reminded that these
objects violated French law. What is the purpose of laws?
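The two-layer scheme discussed in the French orders (geolocation first, a self-certification page as fallback) can be sketched as follows. The prefix table and function names here are invented for illustration; real geolocation relies on commercial IP-to-location databases, and even then is only probabilistic.

```python
# Sketch of the blocking scheme from the French orders: layer 1 is IP
# geolocation; layer 2, for addresses geolocation cannot place, is a
# page asking the user to certify they are not in France.
# GEO_TABLE is a made-up stand-in for a real IP-to-location database.

GEO_TABLE = {
    "81.": "FR",    # hypothetical prefix of a French ISP
    "64.": "US",    # hypothetical prefix of a US ISP
}

def country_for_ip(ip):
    """Return a country code, or None if the address cannot be placed."""
    for prefix, country in GEO_TABLE.items():
        if ip.startswith(prefix):
            return country
    return None

def may_view_auction(ip, certified_not_in_france):
    country = country_for_ip(ip)
    if country == "FR":
        return False                     # layer 1: geolocation block
    if country is None:
        return certified_not_in_france   # layer 2: certification page
    return True                          # located outside France
```

The 70% and 90% figures in the text refer to layer 1 alone; layer 2 catches only those unplaceable users who answer honestly, which is exactly the point of the discussion question above.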
The 9th Circuit Appellate court, ruling en
banc, held that the US likely did have jurisdiction in the case
against LICRA and UEJF, specifically because of LICRA and UEJF's actions
against Yahoo US in French court. BUT the case was directed to be
"dismissed without prejudice", as it was not yet ready to be decided. It
was not in fact "ripe"; there was
no active controversy.
(same thing happened to US v Warshak, when the 6th circuit en
banc ruled the question was not "ripe")
The appellate decision rested squarely on Yahoo US's insistence that its change of policy regarding the sale of "hate" artifacts was unrelated to the French case. As a result, Yahoo could not show that its speech was in any way chilled, and therefore there was no actual controversy. The Appellate court also took into account LICRA and UEJF's lack of interest in pursuing the penalties. Finally, paradoxically, the Appellate court hinted that Yahoo could not really have believed that, had LICRA or UEJF asked for penalties, any US court would have gone along; any US court would reject such a judgment (perhaps on First Amendment grounds, despite the 9th circuit's wording here):
Ironically, because Yahoo took the ethical approach of banning the sale of hate materials, its legal case became moot.
Judge William Fletcher:
1. Here is a summary of Yahoo's position:
For its part, while Yahoo! does not independently wish to take steps to comply more fully with the French court's orders, it states that it fears that it may be subject to a substantial (and increasing) fine if it does not. Yahoo! maintains that in these circumstances it has a legally cognizable interest in knowing whether the French court's orders are enforceable in this country.
2. The French court did not ask for restrictions on US citizens. If
geolocation filtering works, in other words, the issue is moot:
The underlying theory here is that the worldwide scope of a website is not a given.
3. Maybe Yahoo is ok in France.
(Note, however, that the uncertainty still hangs over Yahoo.)
At other points, Judge Fletcher uses the fact that neither LICRA nor UEJF
have taken further steps as additional evidence that there is no "active
controversy". Another sentence along this line is
And here's the kicker, dismissing the "chilled speech" issue:
The First Amendment applies in the US, not in France. Not that Judge Fletcher doesn't get this:
That, of course, was due to Yahoo's ethical
decision not to allow the sale of hate materials.
Judge Fletcher then states
The first phrase here, about French users, was omitted by some sites that reported on the decision [including me -- pld]; that omission decidedly changes Fletcher's meaning, which is that the First Amendment does not necessarily protect French users.
Fletcher concludes with the following, implicitly addressing Yahoo's issue that they were still allowing the sale of Mein Kampf in violation of the French orders:
These issues led to the declaration of non-ripeness.
This is a JURISDICTIONAL case that was left undecided, officially, though the Ninth Circuit certainly hinted that France did not have authority to demand restrictions on US citizens.
At about the same time, there was growing improvement in advertising-based geolocation software (IP addr -> location); the earlier blocking estimates rose from 70% to well over 90%.
Although the company has granted some of the requests, delisting was only carried out on European extensions of the search engine and not when searches are made from "google.com" or other non-European extensions.
In accordance with the CJEU judgment, the CNIL considers that in order to be effective, delisting must be carried out on all extensions of the search engine and that the service provided by Google search constitutes a single processing. Google was given fifteen days to respond.
Google appealed in May 2016. See their statement, in which they say "as a matter of both law and principle, we disagree with this demand".
The CNIL agreed in 2017 to refer the question to the European Court of Justice (ECJ). A decision is not expected before 2019.
The Supreme Court of Canada ruled in June 2017 against Google.
After Equustek won in Canada, Google asked a US court for a declaratory judgement that the Canadian court's order was not enforceable in the United States. A federal district court in California granted this in November 2017. (http://jolt.law.harvard.edu/digest/google-v-equustek-united-states-federal-court-declares-canadian-court-order-unenforceable).
It is not clear what will happen next. Perhaps the issue will be settled as part of the NAFTA reconsideration. Perhaps Canada will refuse to let Canadians advertise with Google. But that is not likely to help Canada any.
Sec. 8‑902. Definitions. (a) "Reporter" means any person regularly engaged in the business of collecting, writing or editing news for publication through a news medium on a full‑time or part‑time basis . . . .
Here is the essential problem:
This is a significant issue in the "free speech" of employees. Note how
giving providers an easy way to get libel cases dismissed via summary
judgment makes this strategy for corporations much more difficult.
Supposedly Apple employees are fired if they write about Apple online.
In 2004, some bloggers announced new Apple rumors. In this case,
apparently the rumors were accurate, and involved inside information from
Apple employees. Apple sued, in the case Apple
v Does, for the identities of the insiders. Apple argued in court
that bloggers were not covered by the California shield law, and that even
if they were they must still divulge the identities of their contacts. The
trial court ruled in Apple's favor in 2005; the California Court of
Appeals reversed in 2006. From the 2006 decision:
Note that the issue here is the use of the legal system to find identities of anonymous posters. Baase has an entire section on anonymity (4e §3.4 / 5e §3.5).
What about employee bloggers?
Well, is it? On the one hand it is an expressive medium; on the other, source code has a functional quality absent from political rants. You can compile it and it does something.
Cases where it's been debated:
For a while, the NSA (National Security Agency) tried very hard to block even publication of scientific papers. They would issue "secrecy orders".
But eventually the government's weapon of choice was ITAR: the International Traffic in Arms Regulations.
Suppose you make F-16 fighters. You need a munitions export permit to sell these overseas. What if you make open-source encryption software? You need the same kind of permit! Even if you GIVE IT AWAY!!
BOOKS were exempt. The rule applied only to machine-readable forms. For
a while, there was a machine-readable T-shirt with the RSA encryption
algorithm on it.
Discussion: does it make any
sense to ban the online source code, if a book in which the same code is
printed can be freely distributed?
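To make the discussion concrete: here is a toy RSA implementation. Under the old rules, this machine-readable file would have required an export permit, while the identical lines printed in a book would not. The parameters are tiny demonstration values, not a real key.

```python
# Toy RSA, for illustration only: real keys use primes of 1024+ bits.
p, q = 61, 53            # two small primes (demo values)
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e

def encrypt(m):          # m must be an integer less than n
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)
```

A dozen lines like these fit comfortably on a T-shirt, which is what made the machine-readable/printed distinction so easy to ridicule.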
Phil Zimmermann released PGP ("Pretty Good Privacy") as an open-source
project in 1991. The government made him promise not to do it again.
Zimmermann's associates outside the US released the next version.
Zimmermann was under criminal investigation for three years, but the case was dropped in 1996 without charges being filed.
PGP later became a commercial software company, but not before aiding in
the creation of the OpenPGP standard (and allowing that use of the PGP
name). The open-source version is now GPG (Gnu Privacy Guard).
In 1994 Bruce Schneier wrote a textbook Applied
Cryptography. All the algorithms were printed, and also included
verbatim on a 3.5" floppy disk in the back of the book. Phil Karn (of
Karn's Algorithm for estimating packet RTT times) applied for an export
license for the package, also in 1994. It was granted for the book
(actually, the book needed no license), but denied for the floppy.
Discussion: does this make sense?
Some of Karn's notes are at http://www.ka9q.net/export.
Daniel Bernstein created a cipher called "snuffle". In 1995, while a graduate student at UC Berkeley, he sued to be allowed to publish his paper on snuffle and to post it to the internet. In 1997 the district court ruled in his favor. In 1999 a 3-judge panel of the 9th circuit ruled in his favor, although more narrowly. Opinion of Judge Betty Fletcher:
Prior-restraint was one issue
Bernstein's right to speak is the issue, not foreigners' right to hear
But does source code qualify? see p 4232: C for-loop; 4233: LISP
Snuffle was also intended, in part, as political expression. Bernstein
discovered that the ITAR regulations controlled encryption exports, but
not one-way hash functions such as MD5 and SHA-1. Because he believed that
an encryption system could easily be fashioned from any of a number of
publicly-available one-way hash functions, he viewed the distinction made
by the ITAR regulations as absurd. To illustrate his point, Bernstein
developed Snuffle, which is an encryption system built around a one-way
hash function. (Arguably, that would now make Snuffle political
speech, generally subject to the fewest restrictions!)
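Bernstein's point, that any one-way hash function yields an encryption system, can be sketched in a few lines. This is not Snuffle itself (whose construction differs in detail); it simply runs SHA-256 in counter mode to generate a keystream and XORs it with the data. A sketch for illustration, not a vetted cipher.

```python
# Turning a one-way hash into a cipher: hash (key, counter) pairs to
# produce a keystream, then XOR it with the data. Encryption and
# decryption are the same operation. Illustrative only.
import hashlib

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key, data):
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Since ITAR controlled encryption but not hash functions, a regulated cipher is here built entirely from an unregulated primitive, which was exactly the absurdity Bernstein wanted to illustrate.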
Here is Judge Fletcher's main point:
Thus, cryptographers use source code to express their scientific ideas in much the same way that mathematicians use equations or economists use graphs. Of course, both mathematical equations and graphs are used in other fields for many purposes, not all of which are expressive. But mathematicians and economists have adopted these modes of expression in order to facilitate the precise and rigorous expression of complex scientific ideas.13 Similarly, the undisputed record here makes it clear that cryptographers utilize source code in the same fashion.
Government argument: ok, source code might be expressive, but you can also run it and then it does something: it has "direct functionality"
Fletcher: source code is
meant, in part, for reading. More importantly, the idea that it can be
banned due to its "direct functionality" is a problem: what if a computer
could be ordered to do something with spoken commands? Would that make
speech subject to restraint? In some sense absolutely
yes; if speech became action then it would be, well, actionable
(that is, something that could be legally prohibited).
In 1999, the full 9th Circuit agreed to hear the case; it was widely expected to make it to the Supreme Court.
But it did not. The government dropped the case.
The government also changed the ITAR rules regarding cryptography. Despite these changes, Bernstein continued to appeal. In 2003 a judge dismissed the case until such time as the government made a "concrete threat".
Peter Junger was a professor at Case Western Reserve University. He wanted to teach a crypto course that included foreign students.
The district court concluded that the functional characteristics of source code overshadow its simultaneously expressive nature. The Sixth Circuit reversed, holding that the fact that a medium of expression has a functional capacity should not preclude constitutional protection:
Because computer source code is an expressive means for the exchange of information and ideas about computer programming, we hold that it is protected by the First Amendment.
BUT: there's still a recognition of the need for balancing:
There are several; the best known is Universal Studios v Reimerdes, Corley, and Kazan. Eric Corley, aka Emmanuel Goldstein, is the publisher of 2600 magazine. In 2000 the magazine included an article about a new program, "deCSS", that removed the CSS ("content-scrambling system") encryption from DVDs, thus allowing them to be copied to hard disks, converted to new formats, and played on linux systems.
DeCSS was developed around 1999, supposedly by Jon Lech Johansen, working with others; it was released in 1999, when Johansen was about 16. He was tried in Norway in 2002, and was acquitted.
Cute story about Jon: In 2005, supposedly Sony
stole some of his GPL-covered code for their XCP "rootkit" project. Jon
might have been able to sue for huge damages (though the usual
RIAA-lawsuit standard is based on statutory damages per item copied, and
here Sony might argue only one thing was copied). More at http://news.slashdot.org/story/05/11/17/1350209/dvd-jons-code-in-sony-rootkit
Judge Kaplan memorandum, Feb 2000, in Universal v Reimerdes:
As a preliminary matter, it is far from clear that DeCSS is speech protected by the First Amendment. In material respects, it is merely a set of instructions that controls computers.
He then goes on to consider the "balancing" approach between free speech and regulation, considering the rationale for the regulation and the relative weights of each side.
The computer code at issue in this case does
little to serve these goals [of expressiveness]. Although this Court has
assumed that DeCSS has at least some
expressive content, the expressive aspect appears to be minimal when
compared to its functional component. Computer code primarily is
a set of instructions which, when read by the computer, cause it to
function in a particular way, in this case, to render intelligible a data
file on a DVD. It arguably "is best treated as a virtual machine . . . ."
[the decision cites Lemley & Volokh, Freedom of Speech and Injunctions in Intellectual Property Cases, Duke Law Journal 1998. However, the sentence in Lemley and Volokh's paper explicitly refers to executable object code, not source! "The Bernstein court's conclusion, even if upheld, probably doesn't extend past source code to object code, however. We think most executable software is best treated as a virtual machine rather than as protected expression." Judge Kaplan apparently did not grasp the distinction, though, to be fair, the above quote appeared only in the initial memorandum, and not in the final decision.]
Note that this virtual-machine argument renders the Bernstein precedent irrelevant! Actually, the virtual-machine argument pretty much presupposes that you have come down solidly on the side of code-as-function instead of code-as-expression.
Also note the weighing of expression versus functionality, with the former found wanting.
As for the free-speech issue, the final decision contains the following language:
It cannot seriously be argued that any form of computer code may be regulated without reference to First Amendment doctrine. The path from idea to human language to source code to object code is a continuum.
The "principal inquiry in determining content neutrality ... is whether the government has adopted a regulation of speech because of [agreement or] disagreement with the message it conveys." The computer code at issue in this case, however, does more than express the programmers' concepts. It does more, in other words, than convey a message. DeCSS, like any other computer program, is a series of instructions that causes a computer to perform a particular sequence of tasks which, in the aggregate, decrypt CSS-protected files. Thus, it has a distinctly functional, non-speech aspect in addition to reflecting the thoughts of the programmers.
What do you think of this idea that the DMCA is "non-content-based" regulation? What would you say if someone claimed deCSS was intended to express the view that copyright was an unenforceable doctrine?
Do you think that Judge Kaplan was stricter here than in the crypto cases
because crypto was seen as more "legitimate", and deCSS was clearly
intended to bypass anticircumvention measures?
The district court issued a preliminary injunction banning 2600.com from hosting deCSS; the site then simply included links to other sites carrying it. The final injunction also banned linking to such sites. Furthermore, the decision included language that equated linking with anti-circumvention trafficking.
The Appellate decision was similar to Judge Kaplan's District Court
opinion, though with somewhat more on the constitutional issues, and an
additional twist on linking. Also, note that one of Corley's defenses was
that he was a journalist, simply reporting the news.
However, in full context, that idea was harder to support. Corley's
mistake was in describing DeCSS as a
way to get free movies. What if he had
stuck to the just-the-facts approach, and described exactly how easy it
was to copy DVDs without actually urging you to do it? Is this similar to
the theoretical "Grokster" workaround?
Both the DC and Appellate courts held that the DMCA targets only the
"functional component" of computer speech.
One argument was that the CSS encryption makes Fair Use impossible, and
that therefore the relevant section of the DMCA should be struck down. The
appellate court, however, ruled instead that "Subsection 1201(c)(1)
ensures that the DMCA is not read to prohibit the 'fair use' of
information just because that
information was obtained in a manner made illegal by the DMCA".
Subsection 1201(c)(1) reads
(c) Other Rights, Etc., Not Affected. — (1) Nothing in this section shall affect rights, remedies, limitations, or defenses to copyright infringement, including fair use, under this title.
In other words, while the DMCA can make Fair Use impossible, it remains an affirmative defense against charges of infringement. This is an interesting argument by the court! Literally it is correct, but the practical problems with Fair Use access go unaddressed.
There is also, though, another issue: Corley was not being charged with infringement. Literally speaking, Fair Use is not an affirmative defense against charges of violating the DMCA anticircumvention issue.
Some notes on the free-speech argument:
The court also acknowledged Junger v Daley (above).
That is, the DeCSS code may be said to be "expressive speech", but it is
not being banned because of what it expresses.
As for hyperlinks (in the section "Linking"),
What if one simply printed the site name, without the link: eg cs.luc.edu? For links, one can argue that the expressive and functional elements -- what the other site is, and how to get there -- are inseparable.
The non-linking rule may become more of an issue as time goes on and the
US attempts to remove from the DNS system sites which provide illegal
access to copyrighted material. In the future, identifying a new IP
address for, say, the now-seized megaupload.com may be suspicious.
Ironically, nobody uses deCSS any more. You can get the same effect with Videolan's VLC player, originally offered as a Linux DVD player but now also popular for Windows and Macs.
Furthermore, as of Windows 8, Microsoft no longer supplies a free DVD player with Windows. So VLC is it.
How did this come about? MS, after all, introduced protected processes into Windows 7, under pressure from the media industry, specifically to prevent attaching debuggers to read things like embedded CSS decryption keys.
The MS issue turns out to be the MPEG-2 patent-licensing fee. It's $2.00 per device, according to http://www.mpegla.com/main/programs/M2/Pages/Agreement.aspx:
libdvdcss is a library that can find and guess
keys from a DVD in order to decrypt it.
This method is authorized by a French law decision CE 10e et 9e soussect., 16 juillet 2008, n° 301843 on interoperability.
Gallery of DeCSS: http://www.cs.cmu.edu/~dst/DeCSS/Gallery
Check out these in particular:
Does the entire gallery serve to establish an expressive purpose?

Lower down appears some correspondence between Touretzky and the MPAA.