Hacking and Computer Crime
Andrew "Weev" Auernheimer
To some of you, hacking is clearly
wrong and there shouldn't even be a question here. If you're one
of them, just pay attention to the legal-strategies-against-hackers part.
However, is using a website in a manner contrary to the provider's
intentions always hacking? A harder question: what about logging on to a
site, but not changing anything and in particular not committing theft?
Baase's "three phases of hacking"
1. Early years: "hacking" meant "clever programming"
2. ~1980 to ~1995:
hacking as a term for break-in
phone lines, BBSs, gov't systems
lots of social engineering to get passwords
1994 Kevin Mitnick Christmas Day attack on UCSD (probably not carried out
by Mitnick personally), launched from apollo.it.luc.edu.
3. post-1995: hacking for money
early years / trophy
Phone phreaking: see Baase, 4e p 232, 247
Joe "The Whistler"
Engressia was born blind in 1949, with perfect pitch. He discovered
(apparently as a child) that, once a call was connected, if you sent a
2600 Hz tone down the line, the phone system would now let you dial a new
call, while continuing to bill you for the old one. Typically the first
call would be local and the second long-distance, thus allowing a
long-distance call for the price (often zero) of a local call. Engressia
could whistle the 2600 Hz tone.
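The tone itself is trivial to synthesize today. Here is a minimal sketch
using only the Python standard library; the 8000 Hz sample rate and
one-second duration are arbitrary choices for illustration, not anything
dictated by the old phone network:

```python
import math
import struct
import wave

RATE = 8000        # samples/sec; telephone-grade audio (arbitrary choice)
FREQ = 2600.0      # the in-band trunk-supervision tone Engressia could whistle

def tone_samples(freq=FREQ, rate=RATE, seconds=1.0):
    """Return one channel of 16-bit sine-wave samples."""
    n = int(rate * seconds)
    return [int(32767 * math.sin(2 * math.pi * freq * t / rate))
            for t in range(n)]

def write_wav(path, samples, rate=RATE):
    """Write mono 16-bit PCM samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

# write_wav("tone2600.wav", tone_samples())
```

Playing such a file into a modern phone does nothing, of course; in-band
signaling was retired precisely because of this attack.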
According to the Wikipedia article on John Draper, Engressia also
discovered that the free whistle in "Cap'n Crunch" cereal could be
modified to produce the tone; Engressia shared this with Draper, who
popularized it. Draper took the nickname "Cap'n Crunch".
As an adult, Engressia wanted to be known as "Joybubbles"; he died August
8, 2007.
Draper later developed the "blue box" that would generate the 2600 Hz
trunk-line-idle tone and also other tones necessary for dialing.
How do we judge these people today? At the time, they were folk heroes.
Everyone hated the Phone Company!
Is phone-phreaking like file sharing? Arguably, there's some public
understanding now that phone phreaking is wrong. Will there later be a
broad-based realization that file-sharing is wrong?
How wrong is what they did? Is
there a role for exposing glitches in modern technology?
From Bruce Sterling's book The Hacker
Crackdown: Law and Disorder on the Electronic Frontier, mit.edu/hacker:
What did it mean to break into a computer
without permission and use its computational power, or look around inside
its files without hurting anything? What were computer-intruding hackers,
anyway -- how should society, and the law, best define their actions? Were
they just browsers,
harmless intellectual explorers? Were they voyeurs,
snoops, invaders of privacy? Should they be sternly treated as potential agents of espionage, or perhaps
as industrial spies? Or were they
best defined as trespassers,
a very common teenage misdemeanor? Was hacking theft
of service? (After all, intruders were getting someone else's
computer to carry out their orders, without permission and without
paying). Was hacking fraud? Maybe
it was best described as impersonation.
The commonest mode of computer intrusion was (and is) to swipe or snoop
somebody else's password, and then enter the computer in the guise of
another person -- who is commonly stuck with the blame and the bills.
What about the Clifford Stoll "Cuckoo's Egg" case: tracking down an
intruder at Berkeley & Livermore Labs; Markus Hess was a West German
citizen allegedly working for the KGB. Hess was arrested and eventually
convicted (1990). Berkeley culture at that time was generally to tolerate
such intrusions.
Robert Tappan Morris (RTM) released his Internet worm in 1988; this was
the first large-scale internet exploit. Due to a software error, it
propagated much more
aggressively than had been intended, often consuming all the available
CPU. It was based on two vulnerabilities: (1) a buffer overflow in the
"finger" daemon, and (2) a feature [!] in many sendmail versions that
would give anyone connecting to port 25 a root shell if they entered the
secret password "wiz".
Were Morris's actions wrong? How wrong? Was there any part that was
legitimate? RTM was most likely trying to boost his academic reputation by
discovering a security vulnerability. There was no financial incentive.
The jury that convicted him spent several hours discussing Morris's
argument that when a server listened on a port (eg an email server
listening on port 25), anyone was implicitly authorized to send that port
anything they wanted. That is, it
is the server's responsibility to filter out bad data. While the jury
eventually rejected this argument, they clearly took it very seriously.
Morris went on to become a professor at MIT.
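Morris's "it is the server's responsibility to filter" argument is easy to
illustrate. The 1988 fingerd read its request into a fixed 512-byte C
buffer with gets(), with no length check, so an oversized request
overwrote the stack. A defensive server rejects oversized input instead of
trusting the sender. A hypothetical Python sketch (the 512-byte limit
matches the historical buffer; everything else is illustrative):

```python
MAX_REQUEST = 512   # the historical fingerd used a fixed 512-byte C buffer

def handle_finger_request(raw: bytes) -> str:
    """Parse a finger-style request defensively.

    The 1988 fingerd copied the request into its buffer with gets(),
    so a longer request overwrote the stack; here the length is
    checked before the data is used at all.
    """
    if len(raw) > MAX_REQUEST:
        raise ValueError("request too long; refusing possible overflow")
    return raw.decode("ascii", errors="replace").strip()
```

Whether failing to do such filtering amounts to implicit authorization of
whatever arrives on the port is, of course, exactly what the jury debated.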
Mitnick attack: how much of a problem was that, after all? There are
reports that many Mitnick attacks were part of personal vendettas. (Most
of these reports trace back to John Markoff's book on Mitnick; Markoff is
widely believed to have at a minimum tried to put a slant on the facts
that would drive book sales.)
Stage 3: even now, not all
attacks are about money.
Baase, 3e p 259:
"In 1998, the US Deputy defense secretary desribed a series of attacks on US
military computers as 'the most organized and systematic attack the Pentagon
has seen to date.' Two boys, aged 16 and 17, had carried them out."
What about the London attack of about the same era on air-traffic control?
2000: the "Love Bug" or ILOVEYOU virus, probably by a pair of Phillipine
programmers Reonel Ramones and Onel de Guzman. If you read the subject and
opened the document, an MS-word macro launched the payload, which would then
send the virus on to everyone in your address book.
MS-word macros were at the time (and still are) an
appallingly and obviously bad
idea. Should people be punished for demonstrating this in such a public way?
Was there a time when such a demonstration might have been legitimate?
Some Loyola offices still create forms as word documents with macros, and
expect the rest of us to fill them out.
Yahoo DDoS attack & mafiaboy, aka Michael Calce.
The attack was launched in February 2000. Calce was discovered after
bragging about the attack pseudonymously in chatrooms. Alas for him, he had
previously used his pseudonym "mafiaboy" in posts that contained
identifying details.
Conficker worm, April 1, 2009, apparently about creating a botnet,
possibly for sending email spam.
Putting a dollar value on indirect attacks
This is notoriously hard. One of Mitnick's colleagues was facing damage
claims from one of the Baby Bell companies in excess of $100,000, when it
was pointed out during the trial that the stolen document was in fact for sale for under $25.
Mark Abene (Phiber Optik) was imprisoned for twelve months. That was rather
long for the actual charge. Mitnick himself spent nearly five years in
prison, 4.5 of which were pre-trial.
That situation is similar to that of Terry Childs in San Francisco, now
finally out of prison.
Calce, Abene & Mitnick all now work in computer security. Is this
appropriate? Of course, if you believe the charges themselves were
inappropriate, you might readily agree.
One theory is that gaining notoriety for an exploit is the way
to get a security job. Is that appropriate?
If not, what could be done differently?
David Kernell hacked Sarah Palin's email account in 2008, at age 20 (this
was the case where we earlier watched Bill O'Reilly declaim
about the equivalence of physical and intellectual property). He served his
366-day sentence at the Midway Rehabilitation Center in Tennessee, and was
allowed to continue at school while in jail. Was this an appropriate
sentence?
As of ~2012, most computer attacks are launched via web pages, although I
still get lots of emailed virus payloads such as IRSnotice.pdf.exe or
russianmodel.jpg.exe (under Windows, the final ".exe" is not shown by
default; why does Microsoft still do this?). I am a devoted NoScript user.
However, there are other vulnerabilities too.
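The double-extension trick is easy to screen for mechanically. A minimal
sketch (the list of dangerous extensions here is illustrative, not
exhaustive, and this is not any mail filter's actual rule):

```python
import os

# Illustrative, not exhaustive: extensions Windows will execute on open
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".pif", ".bat", ".cmd", ".vbs"}

def looks_deceptive(filename: str) -> bool:
    """Flag names like 'russianmodel.jpg.exe' that display as harmless
    when the OS hides the final (known) extension."""
    root, ext = os.path.splitext(filename.lower())
    if ext not in EXECUTABLE_EXTS:
        return False
    # Deceptive only if a second, innocent-looking extension remains
    # once the real one is hidden.
    return os.path.splitext(root)[1] != ""
```

Note that a plain `setup.exe` is not flagged; the point is the mismatch
between the visible name and the true extension.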
Once upon a time, authorities debated charging a hacker for the value of
electricity used; they had no other tools. The relative lack of legal
tools for prosecution of computer breakins persisted for some time.
The Computer Fraud & Abuse Act
of 1986 made it illegal to access computers without authorization (or to
commit fraud, or to get passwords). Robert Tappan Morris was the first
person convicted under this law.
USA PATRIOT Act:
Extends the CFAA, and provides that when totting up the cost of the attack,
the victim may include all costs of response and recovery. Even
unnecessary or irresponsible costs. Even costs they should have
incurred anyway.
"Trespass of Chattels": maybe.
This is a legal doctrine in which one party intentionally interferes with
another's chattels, essentially
personal property (including computers). Often actual harm need not be
proven, just that the other party interfered, and that the interference
was intentional and without authorization.
In 2000 eBay won a case against Bidder's
Edge where the latter used search robots to get information on
eBay auctions. The bots used negligible computational resources. The idea
was for Bidder's Edge to sell information to those participating in eBay
auctions. In March 2001, Bidder's Edge settled as it went out of business.
Later court cases have often required proof of actual harm, though. In
1998 [?], Ken Hamidi used the Intel email system to contact all employees
regarding Intel's allegedly abusive and discriminating employment
policies. Intel sued, and won at the trial and appellate court levels. The
California Supreme Court reversed in 2003, ruling that use alone was not
sufficient for a trespass-of-chattels claim; there had to be "actual or
threatened damage":
After reviewing the decisions analyzing
unauthorized electronic contact with computer systems as potential
trespasses to chattels, we conclude that under California law the tort
does not encompass, and should not be extended to encompass, an
electronic communication that neither
damages the recipient computer system nor impairs its functioning.
Such an electronic communication does not constitute an actionable
trespass to personal property, i.e., the computer system, because it does
not interfere with the possessor's use or possession of, or any other
legally protected interest in, the personal property itself. [emphasis
added]
How do you prosecute when there is no attempt to damage anything?
Part of the problem here is that trespass-of-chattels was a doctrine
originally applied to physical intrusions,
and it was quickly seized on as a tool against those who were using a
website in ways unanticipated by the creator (eg Bidder's Edge). Is that
illegal?
Should the law discourage that? Should website owners be able to dictate
how visitors use publicly viewable pages (ie pages where a login is not
required)?
International Airport Centers v Citrin
Generally the Computer Fraud & Abuse Act (CFAA) is viewed as being
directed at "hackers" who break in to computer systems. However, nothing in
the act requires that a network breakin be involved, and it is clear that
Congress understood internal breakins to be a threat as well. The law itself
dates from the era of large mainframes.
Just when is internal access a violation of the CFAA? Internal access is
what Terry Childs is accused of.
In the 2006 Citrin case, the
defendant deleted files from his company-provided laptop before quitting his
job and going to work for himself. From http://technology.findlaw.com/articles/01033/009953.html:
Citrin ultimately decided to quit and go
into business for himself, apparently in breach of his employment contract
with the companies. Before returning the laptop to the companies, Citrin
deleted all of the data in it, including not only the data he had
collected [and had apparently never turned over to his employer -- pld],
but also data that would have revealed to the companies improper conduct
he had engaged in before he decided to quit. He caused this deletion using
a secure-erasure program, such that it would be impossible to recover the
files.
His previous employer sued under the CFAA, noting that the latter contained
a provision allowing suits against anyone who "intentionally causes damage
without authorization to a protected computer". Citrin argued that he had authorization to use his
company-provided laptop. The District Court agreed. The Seventh Circuit
(which includes Illinois) reversed, however, arguing in essence that once
Citrin had decided to leave the company, and was not acting on the company's
behalf, his authorization ended.
Or (some guesswork here), Citrin's authorization was only for work done on
behalf of his employer; work done against
the interests of his employer was clearly not authorized.
Note that Citrin's specific act of
deleting the files was pretty clearly an act that everybody
involved understood as not what his employer wanted. This is not
a grey-area case in that regard. However, trade-secrecy laws might also
apply, as might contract law if part of Citrin's employment contract
spelled this out.
Compare this to the Terry Childs or Randal Schwartz cases, below. We don't
have all the facts yet on Childs, but on a black-and-white scale these cases
would seem at worst to be pale eggshell (that is, almost white). It seems
very likely that Schwartz's intent was always to improve
security at Intel; it seems equally likely that at least in the three
modem-related charges against Childs there was absolutely no intent to
undermine city security.
Once again, the court looked at Citrin's actions in broad context, rather
than in narrow technological terms. However, it remains unclear whether the
court properly understood the full implications. In the context of the
Citrin case, the Seventh Circuit simply allowed a civil lawsuit based on the
CFAA to go forward. But the CFAA also criminalizes
exactly the same conduct that it allows as grounds for civil suits.
Specifically, §1030 states:
accesses a computer without authorization or exceeds authorized
access, and thereby obtains information
from any protected computer [a computer "which
is used in or affecting interstate or foreign commerce or
communication"; ie any computer on the
Internet -- pld]
(b) ... shall be punished as provided [below]
(c) (1) (A) ... imprisonment for not more than ten
years [plus a fine].
I'm not sure if that's ten years total or ten years per offense.
There was no felony prosecution of Citrin, but consider the following
unauthorized uses of a computer:
- Use of Google.com (even for searching) by a minor, prior to March 1,
2012 (when the Google ToS changed)
- Personal web browsing while at work, if the workplace prohibits such
- Creating a Facebook account under a pseudonym.
Should a person be subject to felony charges for any of the above?
US v Nosal
In an en banc decision handed down
April 10, 2012 by the Ninth Circuit, the court ruled that someone who was
authorized to access the data in question could not
be charged under the CFAA simply because that access was contrary to the
terms of the data owner (ie the employer). This is in more-or-less direct
conflict with the Seventh Circuit's ruling in Citrin,
suggesting that the Supreme Court is likely to take up this case at some
point.
Nosal, like Citrin, had worked for a company (Korn/Ferry) and left to start
his own business. Nosal did not take K/F data himself, but persuaded some
former colleagues to send him the data. The colleagues were also charged.
Part of what is at stake is that the above phrase, "exceeds authorized
access", is used in the rather general section (a)(1), but also in section
(a)(4) dealing with fraud. Nosal was originally charged under §(a)(4), and
other courts have ruled that fraud based on unauthorized access is indeed
covered. However, the language in both sections is the same, and a general
legal principle is that you should not interpret language differently simply
because the context is different.
Judge Kozinski, in his decision, wrote:
 The CFAA defines "exceeds authorized
access" as "to access a computer with authorization and to use such access
to obtain or alter information in the computer that the accesser is not
entitled so to obtain or alter." 18 U.S.C. § 1030(e)(6). This language can
be read either of two ways: First,
as Nosal suggests and the district court held, it could refer to someone
who's authorized to access only certain data or files but accesses
unauthorized data or files: what is colloquially known as "hacking." For
example, assume an employee is permitted to access only product
information on the company's computer but accesses customer data: He would
"exceed[ ] authorized access" if he looks at the customer lists. Second,
as the government proposes, the language could refer to someone who has
unrestricted physical access to a computer, but is limited in the use to
which he can put the information. For example, an employee may be
authorized to access customer lists in order to do his job but not to send
them to a competitor.
Kozinski then argued that the second interpretation is much too broad:
[W]e hold that the phrase "exceeds
authorized access" in the CFAA does not extend to violations of use
restrictions. If Congress wants to incorporate misappropriation liability
into the CFAA, it must speak more clearly. The rule of lenity requires
"penal laws . . . to be construed strictly."
Ultimately, Kozinski's argument would suggest that if a site or employer did
not want you to have access to some data, they should take measures to be
sure you cannot access it routinely.
See also Volokh's
US v Van Buren
Georgia police officer Van Buren accessed a state police database, and
sold information he obtained there. In 2021 the Supreme Court decided [https://www.supremecourt.gov/opinions/20pdf/19-783_k53l.pdf]
that his use did not "exceed authorized access", because
he had authorized access to the system. Justice Barrett wrote:
This provision covers those who obtain
information from particular areas in
the computer—such as files, folders, or databases—to which
their computer access does not extend. It does not cover those who,
like Van Buren, have improper motives for obtaining information that is
otherwise available to them
That would certainly seem to suggest Citrin's actions would be legal in
the future. It also would appear to suggest that terms-of-service
violations are not criminal.
However, the case does not entirely answer when someone does
exceed authorized access. Is all access via the standard user interface of
necessity "authorized"? What if someone uses the standard API for
accessing the system, but in a way not anticipated by the owner, as in the
Auernheimer case below?
Craigslist v 3Taps, Craigslist v PadMapper
This is another CFAA case that in some ways resembles eBay v Bidder's Edge.
3Taps was a company that scraped Craigslist data, and also data from other
sites, to create specialized search engines for for-sale content. PadMapper
collected Craigslist housing ads and organized them visually on a map.
Craigslist sent both of them "cease-and-desist" letters (that is, letters
asking them to stop using craigslist.com) and blocked some IP addresses used
by each company. The cease-and-desist letters were not based on any
particular law (unlike, for example, DMCA takedown notices), but were simply
formal requests from Craigslist to stop using their website.
3Taps continued to access craigslist.com through proxies; PadMapper started
obtaining Craigslist data from 3Taps.
The central ruling of the lawsuit, reached in 2013, was that the letter and
blocking together was sufficient to establish that 3Taps' and PadMapper's
continued use of craigslist.com was unauthorized under the CFAA, and that
Craigslist could sue.
Note that no login, account creation or terms-of-service agreement is
necessary to access Craigslist's data. In effect, 3Taps and PadMapper were
being told that this publicly available data was available to everyone except
them; they were singled out by the cease-and-desist letters.
Having lost this crucial ruling, 3Taps settled the case, agreeing to pay
$1,000,000 and to stop using craigslist.com data. Part of the agreement was
that the money would be turned over to the EFF over ten years.
3Taps had argued that, because craigslist.com was a public website, it had
authorization to access it as a member of the public. The court disagreed
with this, stating that the cease-and-desist letters had the effect of
revoking 3Taps' and PadMapper's authorizations.
The cease-and-desist letters were sent in June 2012. In July 2012,
Craigslist changed its terms-of-service to disallow 3Taps' and PadMapper's
actions;
initially, Craigslist claimed copyright on all user postings (and thus
became entitled to file copyright-infringement lawsuits against anyone who
copied the postings).
Faced with copyright-infringement litigation, 3Taps and PadMapper were
forced to settle.
The case raises a tricky question of just who is allowed to access "public"
data. There are also questions as to whether the case here amounted to a
restriction on the use of public data, rather than just the collection.
If you post publicly on Twitter, should Twitter be able to claim copyright?
Should you have any privacy objections if someone else re-publishes the
post?
hiQ v LinkedIn
hiQ labs scraped public user profiles from LinkedIn. LinkedIn objected,
and sent a cease-and-desist letter. From LinkedIn's perspective, at that
point hiQ's access became unauthorized, and thus a violation of the CFAA.
The district court found in hiQ's favor, and granted a preliminary
injunction. LinkedIn appealed that injunction to the Ninth Circuit, which
upheld the injunction in September 2019.
Technically, the case has not been tried on the merits, even at the
district court level. But, realistically, it may be time for LinkedIn to
rethink its approach.
One approach is for LinkedIn to hide all user profiles until
the viewer has logged in to LinkedIn and thus presumptively accepted the
LinkedIn terms of service. Of course, this may not be popular with users
who want their LinkedIn profiles to be highly visible.
In June 2021, following the Van Buren decision, the Supreme
Court vacated and remanded the Ninth Circuit's decision for further review
under the Van Buren standard. In one sense, hiQ certainly was
accessing the data using the standard user interface, though there were
terms-of-service issues. In another sense, hiQ was definitely accessing
the data in a way unanticipated by LinkedIn.
Attacks Involving Money
Modern phishing attacks (also DNS attacks)
Stealing credit-card numbers from stores. (Note: stores are not supposed
to retain these, except in special circumstances. However, many do. And
Target did not; their data was stolen "on the fly")
Boeing attack, Baase 4e, p 235: how much should
Boeing pay to make sure no files were changed? Is there a real safety
issue?
TJX attack: Baase 4e p 54 and p
This was the biggest credit-card attack, until it was dwarfed by the
Target attack in 2013. (Though by the time of the Target attack, the
credit-card companies had become much more adept at detecting fraud
patterns and thus limiting the number of stolen cards that could actually
be used.)
The break-in was discovered in December 2006, but may have gone back to
2005.
40 million credit-card numbers were stolen, and 400,000 SSNs, and a large
number of drivers-license numbers.
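As an aside on what a stolen "card number" is: card numbers carry a
built-in public checksum (the Luhn algorithm), so basic validity is
trivially checkable by anyone. A sketch of just the checksum (this has
nothing to do with any issuer's actual fraud-detection systems):

```python
def luhn_ok(number: str) -> bool:
    """Check a card number's Luhn checksum.

    Working right-to-left, double every second digit (subtracting 9
    if the doubled digit exceeds 9); the total must be divisible by 10.
    Non-digit characters such as spaces are ignored.
    """
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

The checksum catches typos, not fraud; a thief holding a real stolen
number passes it automatically, which is why issuers rely on behavioral
fraud patterns instead.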
Hackers apparently cracked the obsolete WEP encryption on wi-fi networks
to get in to the company's headquarters network, using a "cantenna" from
outside the building. Once in, they accessed and downloaded files. There
are some reports that they eavesdropped on data streaming in from stores,
but it seems likely that direct downloads of files were also involved.
Six suspects were eventually arrested. I believe they have all now been
convicted; there's more information in the privacyrights.org page below
(which also pegs the cost to TJX at $500-1,000 million). The attacks were
apparently masterminded by Albert
Gonzalez, one of the six: http://www.cio.com/article/500114/Alleged_Kingpin_of_Data_Heists_Was_a_Computer_Addict_Lawyer_Says.
Gonzalez was sentenced to 20 years, though part of that was for other
attacks.
For a similar case at CardSystems Solutions, see Schneier's blog.
Here the leak was not due to wi-fi problems, but lack of compliance with
standards was apparently involved. Schneier does a good job explaining the
purely contractual security requirements involved, and potential outcomes.
Schneier also points out:
Every credit card company is terrified that
people will reduce their credit card usage. They're worried that all of
this press about stolen personal data, as well as actual identity theft
and other types of credit card fraud, will scare shoppers off the
Internet. They're worried about how their brands are perceived by the
public.
The TJX and CardSystems attacks were intentional,
not just data gone missing.
When attacks ARE about money, often the direct dollar value is huge. And
tracing what happened can be difficult. An entire bank account may be
gone. Thousands of dollars may be charged against EVERY stolen credit-card
number.
Here's a summary of several incidents: http://www.privacyrights.org/ar/ChronDataBreaches.htm#CP.
TJX attack and PCI DSS
An emerging standard is Payment
Card Industry Data Security Standard (PCI DSS), supported by
MasterCard, Visa, Discover, American Express, and others. See http://www.pcicomplianceguide.org/pcifaqs.php
for some particulars; a more official site is https://www.pcisecuritystandards.org.
Note that PCI DSS is not a law, but is "private regulation". Once upon a
time, the most effective regulators of steam-powered ships were insurance
companies [reference?]. This is similar, but MasterCard and Visa are not
quite the same as insurers. From the FAQ above:
Q: What are the penalties for noncompliance?
The payment brands may, at their discretion, fine an
acquiring bank $5,000 to $100,000 per month for PCI compliance violations.
The banks will most likely pass this fine on downstream till it eventually
hits the merchant. Furthermore, the bank will also most likely either
terminate your relationship or increase transaction fees. Penalties
are not openly discussed nor widely publicized, but they can be
catastrophic to a small business.
It is important to be familiar with your merchant account agreement,
which should outline your exposure.
If you are a store, you can refuse to pay the fine. But then you will
lose the ability to accept credit cards. This is extremely bad!
Visa's CISP program is described at http://www.visa.com/cisp.
The PCI standards do allow merchants to store the name and account-number
data. However, this is strongly
discouraged (although it is becoming more acceptable). Sites that
keep this information are required by PCI to have it encrypted.
CardSystems was keeping this data because they were having a
higher-than-expected rate of problems with transactions, and they were
trying to figure out why.
To some extent, PCI DSS compliance is an example of how ethical behavior is
in your own long-term best interest.
Although Target has yet to reveal many details about the theft of 70 million
credit-card numbers, apparently much of the attack was carried out as
follows.
It appears that malware was installed on point-of-sale terminals, which
basically run versions of Windows. Target apparently hasn't even admitted
this much, but those who made online purchases were not affected, and the
POS terminal appears to be the only difference. However, the attackers also
obtained name/address/email information, which would have had to come from
somewhere else internally.
Hackers got in to the Target network, possibly through an HVAC vendor, Fazio
Mechanical. Fazio was given credentials on Target's internal network for
"electronic billing, contract submission and project management"
(http://krebsonsecurity.com/tag/target-data-breach/).
Target apparently stored the CVV codes from the cards. This is a big PCI-DSS
violation.
According to https://www.schneier.com/blog/archives/2014/03/details_of_the_.html,
Target had alert systems (from FireEye)
sound warnings as early as November 30, 2013. But nobody noticed; alert
systems are notorious for false positives. The problem was not announced
until December 19.
Here's another amazing article on this by Brian Krebs, in which he
identifies "Rescator" (rescator.so is one of the sites selling stolen Target
cards) as one Andrew Hodirevski: http://krebsonsecurity.com/2013/12/whos-selling-credit-cards-from-target/.
Identity theft: Baase 4e §5.3. What is it? What can be done?
And WHO IS RESPONSIBLE??
The most common form of identity theft is someone posing as you in order
to borrow money in your name, by obtaining a loan, checking account, or
credit card. When someone poses as you to empty your bank account, that's
generally known as "just plain theft".
Note that most "official" explanations of identity theft describe it as
something that is stolen from you; that is, something bad that has
happened to you. In fact, it is probably more accurate to describe
"identity theft" as a validation error made by banks and other lenders;
that is, as a lender problem.
This is a good example of nontechnical people framing
the discourse to make it look like your identity was stolen from
you, and that you are the victim,
rather than the banks for making loans without appropriate checks. And
note that banks make loans without requiring a personal appearance by the
borrower (which would give the bank a chance to check the drivers-license
picture, if nothing else) because that way they can make more
loans and thus be more profitable.
Hacking and probing
Is it ok to be "testing their security"?
What if it's a government site?
Should you be allowed to run a security scanner against other sites?
What if the security in question is APPALLINGLY BAD?
What if you have some
relationship to the other host?
Baase, 3e p 270:
"The Defense Information Systems Agency estimated that there were 500,000
hacker attacks on Defense Department networks in 1996, that 65% of them
were successful, and that the
Dept detected fewer than 1%". But 1996 was a long long time ago.
Do we as citizens have an obligation
to hack into our government's computers, to help demonstrate how insecure
they are? Well, no. But at some level there is
an obligation to expose collective "security through cluelessness" (bad
protocols that most people don't realize are bad).
Actually, the US government has gotten a lot tighter in the past decade, and
somewhere I have a list of IP addresses which, if you portscan, will get
your ISP contacted and may get some US marshals invited to your house.
What about hacking into Loyola's computers? Are we obligated
to do that? What about Loyola's wireless network?
Ok, once upon a time there might have been some notion of an obligation
to inform "friendly" sites that there were problems with their security,
but unsolicited probing is pretty much a bad idea today.
What is our obligation to prevent
intrusions at other sites that are not likely to be directly harmful to
us?
In 2006, Kevin Mitnick's sites were defaced by a group. There is some
irony in that.
Other Baase cases:
- several attacks against Chinese government sites, due to repressive
policies
- pro-Zapatista groups defacing Mexican government sites
- US DoJ site changed to read "Department of Injustice"
Maybe the most famous example right now is the Anonymous
group. See the wikipedia list at http://en.wikipedia.org/wiki/Timeline_of_events_involving_Anonymous.
Most of the attacks have some connection with some form of authoritarian
governmental crackdown, though some of the crackdowns are "only" against
copyright infringement. Occasionally an attack is to harass a particularly
conservative group, as seen from a relatively juvenile perspective (see
the entry in the above wikipedia timeline for "No Cussing Club").
Most of the attacks are based on distributed denial-of-service methods.
More serious entries:
Operations more focused on censorship might include
- Iranian election protests
- Support of Wikileaks
- Arab Spring support
- Westboro Baptist Church
- Operation Malaysia
- Operation DarkNet (arguably an attack against child-pornography sites)
- Occupy Wall Street
- Operation Nigeria
- Operation Russia
- Operations Didgeridie and Titstorm (about Australian internet
censorship)
- Operation Sony (in response to Sony's lawsuits against George Hotz)
- Cox DNS server attacks
Can these sorts of activities be justified? What about hacking Sony over
rights to use the PlayStation 3 as users see fit?
Should they be tolerated? Encouraged?
- Sometimes vendors ignore exploit reports without the publicity.
- Sometimes users really need a script to tell them if they are
vulnerable; such a script is typically tantamount to an exploit
- Sometimes announcing a flaw gives crackers all they need to exploit
it; withholding details merely gives false security.
Consensus seems to be that zero-day
exploits are a bad idea, that one has some responsibility to let
vendors know about an exploit so a patch can be developed. Though there is
also a fairly significant consensus (perhaps not quite as universal) that
if the vendor doesn't respond you have to do something public.
Microsoft's Patch Tuesday has long been followed by Exploit Wednesday.
Cisco 2005 case involving Michael Lynn:
Cisco threatened legal action to stop the
[July 2005 Black Hat] conference's organizers from allowing a 24-year-old
researcher for a rival tech firm to discuss how he says hackers could
seize control of Cisco's Internet routers, which dominate the market.
Cisco called the disclosure "premature" and claimed Lynn had "illegally
obtained" the information by reverse-engineering. Lynn acknowledged that he
had disassembled some Cisco code, based on an announced Cisco patch, but
found an additional problem that could allow an outsider to take over the
router. Note that a patch had already been released by Cisco, but many
customers had not installed it because Cisco had not indicated it was urgent.
Lynn allegedly demoed his findings to Cisco in June 2005. Initially there
had been talk about a joint security presentation, but these broke down. Or
never started; this is not clear. The Black Hat conference was in late July 2005.
Lynn pretty much did give his
presentation at Black Hat 2005, somewhat unofficially.
The Cisco lawsuit apparently ended with Lynn agreeing not to discuss the
vulnerability further (an agreement that still stands); an injunction
against such discussion was apparently filed in Federal District Court.
Cisco has never offered an explanation for why they were so upset. It is
safe to assume, however, that the threat was
serious, and that someone within Cisco dropped the ball earlier. Their
official objection was that Lynn violated the EULA by decompiling the code;
generally speaking, as an objection this makes no sense.
At the 2006 Black Hat conference, Cisco was a sponsor. Lynn was apparently
invited to the party the company sponsored, although even today his
relationship with Cisco is frosty.
Schneier also has a 2001 essay on full disclosure (with advance notice to
the vendor) at http://www.schneier.com/crypto-gram-0111.html.
In 2008, three MIT students, Russell Ryan, Zack Anderson, and Alessandro
Chiesa, developed "Anatomy of a Subway Hack" (see charlie_defcon.pdf
(especially pages 5, 8, 11/12, 24ff, 41, 49, and 51)). One of the methods of
attack was to take advantage of a vulnerability in the Mifare Classic RFID
chip used by the MBTA's "Charlie Card". They intended to present their
findings at the 2008 Defcon.
US District Judge George O'Toole granted a 10-day temporary restraining
order against the group, but then let it expire without granting the
five-month injunction requested by the MBTA. The MBTA's legal argument was
that the paper violated the Computer Fraud and Abuse Act, but the problem is
that the CFAA normally applies to worms and viruses themselves, and not
to publishing information about them.
Much of the information in the report is highly embarrassing to the MBTA,
such as the photographs of gates left unlocked. Should they be allowed to
publish it anyway?
The MIT group apparently asked their professor, Ron Rivest (the R of RSA),
to give the MBTA an advance heads-up, but it apparently did not happen
immediately as Rivest was traveling at the time, and in any event would have
amounted to just a week or so. The MBTA was eventually informed, and quickly
pushed for an FBI investigation.
The MIT group's RFID hack was based on the work of Gans, Hoepman, and Garcia
in finding flaws in the Mifare Classic chipset; see mifare-classic.pdf.
This is a serious academic paper, as you can tell by the font. Their work is
based on earlier work by Nohl and Plötz, which they cite. On page 4 of my
copy the authors state
We would like to stress that we notified NXP
of our findings before publishing our results. Moreover, we gave them the
opportunity to discuss with us how to publish our results without damaging
their (and their customers) immediate interests. They did not take
advantage of this offer.
Note also that the attack is somewhat theoretical, but it does allow them to
eavesdrop on the encrypted card-to-reader communications, and to read all of
data-block 0 stored on the card (and other blocks, if the data is partially known).
Nohl has said, "It has been known for years that magnetic stripe cards can
easily be tampered with and MBTA should not have relied on the obscurity of
their data-format as a security measure".
(The CTA Chicago Card had many of the same vulnerabilities; this is
presumably one reason for the migration to the Ventra card.)
Buenos Aires and Voting
The city of Buenos Aires uses voting-machine software called "Vot.ar" from
Magic Software Argentina (MSA). Local security researcher Joaquín Sorianello
discovered that the "private" TLS certificates were in fact public. A
different group discovered that a smartphone with NFC capability could add
votes to the RFID chip embedded in the paper ballot (this would be obvious
if the paper and the RFID chip were ever compared, but often only the
latter is read).
After Sorianello reported the problem to MSA, local judge María Luisa Escrich
- ordered local ISPs to block access within BA to Sorianello's website
- authorized a police raid on Sorianello's home and seizure of his
computer equipment
More at https://www.eff.org/deeplinks/2015/07/buenos-aires-censors-and-raids-technologists-fixing-its-flawed-e-voting-system.
Hackers Remotely Kill Jeep Cherokee
Security researchers Charlie Miller and Chris Valasek figured out how to
break into a Jeep Cherokee's engine-control (CAN) network via a cellphone
connection; an intermediate step was to rewrite the firmware of the
entertainment-system head unit. This attack allowed them to:
- change the radio volume and station
- turn on the A/C full blast
- start the wipers
- disengage the transmission
- turn off the engine
- disable the brakes
As of this writing, Miller and Valasek were not able to take over the
steering, unless the car was in reverse.
Miller and Valasek presented their techniques at Black Hat in August
2015. Months before, they notified Chrysler of the problem, which then had
time to prepare a fix.
Chrysler stated that they "appreciated" the work. They also said,
Under no circumstances does [Fiat Chrysler
Automobiles] condone or believe it's appropriate to disclose 'how-to
information' that would potentially encourage, or help enable hackers to
gain unauthorized and unlawful access to vehicle systems.... We appreciate
the contributions of cybersecurity advocates to augment the industry's
understanding of potential vulnerabilities. However, we caution advocates
that in the pursuit of improved public safety they not, in fact,
compromise public safety.
The problem here is that, without the potential for publicity, it is
unlikely Miller and Valasek would have bothered. Academics and independent
security researchers are motivated by publication. If this is discouraged,
security will be left to professional security firms, who to date have not
shown the same willingness to innovate.
More at http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/.
Dejan Ornig
Ornig was a student in Slovenia who discovered that police communications
that were supposed to be encrypted often were not, due to software
misconfiguration. He informed the police, but nothing happened. Eventually
he published his results, and was charged. He received a 15-month suspended
sentence, and had to promise not to investigate police misconfiguration any
more. See http://news.softpedia.com/news/student-who-found-flaws-in-police-communication-protocol-gets-prison-sentence-504333.shtml.
Justin Shafer
An FBI SWAT team raided Justin Shafer's Texas home on May 24, 2016.
Shafer had just recently exposed a software vulnerability at Henry Schein
Dental Software (www.dailydot.com/politics/dental-records-hack-schein-dentrix-g5-settlement/),
but the raid apparently was the result of Shafer's exposure of an earlier
vulnerability in the Eaglesoft dental software system of Patterson Dental.
Shafer had discovered that Patterson Dental kept protected patient
information on an anonymous FTP server (that is, an FTP server that does
not require a password to access the stored documents). Patterson Dental
claimed to the FBI that Shafer's access of this FTP server was
"unauthorized" and hence a felony under the CFAA. Shafer had earlier
notified Patterson Dental, and had not published his results until the
data was secured, or at least no longer accessible without a password.
More at www.dailydot.com/politics/justin-shafer-fbi-raid/.
A year later, Shafer was arrested, for allegedly "stalking" an FBI agent.
This became five felony counts, but all were eventually dropped. See databreaches.net/prosecution-drops-five-felony-charges-against-justin-shafer-accepts-plea-to-one-misdemeanor-charge.
What legal responses are appropriate?
Should we criminalize having hacking tools?
What about magnetic-stripe readers? RFID readers?
What about Pringles cans (for use as cantennas)?
What about DVD players that bypass the region code?
What about C compilers?
What about jailbroken phones or other "sealed" devices?
Note that it is in fact already de
facto illegal (in the sense that police will arrest you if they
find out, and you belong to a Suspicious Group) to possess certain things
that can have illegal uses, such as automotive dent pullers (used to pull
cylinders out of locks) and tools that look like they might be lock picks.
Such arrests may become very frequent if anti-CISPA fears pan out.
The Kutztown 13
High-school students in Kutztown, Pennsylvania were issued 600 Apple
iBooks. The administrative password was part of the school's address,
taped to the back!
The password was changed, but the new one was cracked too. Some of the
students obtained administrative privileges and:
- bypassed browser filtering
- installed chat/IM software, maybe others
- disabled monitoring software
The students were accused of
monitoring teachers or staff, but that seems unlikely.
The school's security model was hopelessly flawed. Who
is responsible for that?
The school argued that the charges were filed because the students signed an
"acceptable use" policy. But why should that make any difference in whether
felony charges were pursued? The students were, after all, minors.
The school simply did not have the resources to proceed properly.
The offenders were warned repeatedly.
But why didn't the schools simply take the iBooks away? Why were felony
charges pursued? The charge was for felony computer trespass.
cutusabreak.org: now gone
Randal Schwartz
Oregon made it a felony to do anything unauthorized, even if harm was not
shown (or did not exist). Here is the text of part of the law; note the
lack of mention of harm:
(4) Any person who knowingly and without
authorization uses, accesses or attempts to access any computer, computer
system, computer network, or any computer software, program, documentation
or data contained in such computer, computer system or computer network,
commits computer crime.
Also, taking a file without authorization was declared to be theft.
The problem is that, in the real world, authorization is often rather
indirect. If you're doing something for the benefit of your employer, and
your employer does not object, would that always be considered authorized?
The biggest issue with the Schwartz case (and he was convicted, not just
charged) is that it seems likely Schwartz had no intent
to cause any harm. In closing arguments the prosecutor focused on the fact
that Schwartz knew he wasn't supposed to be doing what he did, and did it
anyway. Never mind that it might have been done for Intel's benefit. And
never mind that no actual harm was caused.
Schwartz was a contract employee at Intel. He faced three counts:
- Installation of an email backdoor at Intel (he thought he had permission)
- Taking the password file
- Taking individual passwords
Here are the official versions of the latter two charges:
- That the above named defendant(s) on and between August 1, 1993 and
November 1, 1993, in Washington County, Oregon, did unlawfully and
knowingly access and use a computer and computer network for the purpose
of committing theft of the Intel SSD's password file
- That the above named defendant(s) on and between October 21, 1993 and
October 25, 1993, in Washington County, Oregon, did unlawfully and
knowingly access and use a computer and computer system for the purpose
of committing theft of the Intel SSD individual user's passwords
Schwartz had been responsible for SSD system administration and
security, and had monitored the system for weak passwords as part of this
position by using the crack
password-cracking program. In 1992 Schwartz had a conflict with the SSD
manager (Poelitz), and agreed to move on to another position at Intel.
However, he continued to monitor the SSD passwords, as it was clear to him
that the new system administrator was not doing so (it was particularly
clear after the fact: 48 out of 600 passwords were easily broken).
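The kind of monitoring involved can be sketched as follows. This is a toy dictionary check in the spirit of crack, using a plain SHA-256 hash as a stand-in for the salted crypt(3) hashes a real password file would contain; the function names and account data are illustrative, not from the case:

```python
import hashlib

def sha256_hex(word: str) -> str:
    # Stand-in for crypt(3); real systems add a per-user salt.
    return hashlib.sha256(word.encode()).hexdigest()

def find_weak_passwords(accounts: dict[str, str],
                        wordlist: list[str]) -> dict[str, str]:
    """Return {user: guessed_password} for every account whose stored
    hash matches a dictionary word.  The real crack program also tried
    permutations (capitalization, appended digits, and so on)."""
    by_hash = {sha256_hex(w): w for w in wordlist}
    return {user: by_hash[h] for user, h in accounts.items() if h in by_hash}

# Example: two of three accounts use dictionary words.
accounts = {
    "alice": sha256_hex("dragon"),
    "bob":   sha256_hex("k7#Qz!p2x"),
    "carol": sha256_hex("secret"),
}
weak = find_weak_passwords(accounts, ["dragon", "secret", "letmein"])
```

Note that, like Schwartz's monitoring, this requires only read access to the stored hashes: no system is "penetrated".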
Schwartz did not need any elevated privileges to monitor the
passwords. (Supposedly Schwartz's access to the SSD network was supposed
to have been disabled, but it was not, and there was no reason for
Schwartz to believe that continued access was a problem.)
Schwartz's password-cracking actions have been described by Wikipedia as
"penetration testing", but this is a bit of a misnomer as he didn't
penetrate the systems involved at all. When weak passwords were
discovered, he would eventually notify the user or the applicable system
administrator, though sometimes there was delay. There was never, however,
any evidence that Schwartz ever misused any of the passwords, or ever
intended to. It seems clear, both at the time and in retrospect, that he
never had any intent to cause any harm at Intel, but that in fact his
intent had been to prevent harm
at Intel, by continuing to monitor for weak passwords. This turned out
not to matter.
As for the email backdoor in the first charge, here are some of Jeffrey
Kegler's comments at lightlink.com/spacenka/fors/intro.html.
Randal's original reason for writing a gateway
was a request from Dave Riss's staff at Intel, who needed to access their
data and E-mail while at Carnegie Mellon. Riss approved the
result and his group used it for a time. Later, Randal was
traveling extensively and performing duties at Intel which required the
same kind of access, as Intel knew. Randal created a more secure gateway
for this purpose. That Intel knew and approved of Randal's use of gateway
programs for his own duties is shown by the evidence.
When two Intel employees were troubled by the
security of the gateway they asked Randal not to shut it down, but to
change it to run more securely. They checked Randal's changes and
signed off on them. This shows a proper concern about the
security implications of gateways, but it also shows that it was generally
recognized at Intel that Randal was allowed to and did run gateways.
In other words, this email gateway wasn't Randal's idea, and it had been
approved by an Intel security team (after the fact). The email gateway
charge was the only "plausible" count of the three. Technically, Intel did
have a policy against such gateways, though in light of the quote above
Schwartz had reason to believe his gateway was acceptable.
Intel strongly pushed for his prosecution. There is no evidence, however,
that before Schwartz's arrest Intel was in any way dissatisfied with his
job performance. Intel's Mark Morrissey insisted that "Randal did not have
permission for this activity," which was doubtless true narrowly
construed, but Schwartz had file-access permission to read the encrypted
passwords and general Intel permission to run work-related programs. In
Morrissey's report, it appears that Intel security people "found" evidence
of Schwartz's cracking, but Schwartz himself had never made any attempt to
conceal what he was doing.
During Schwartz's trial, it turned out that Intel VP Ed Masi had also
violated the Oregon Computer Law, regularly. He was not prosecuted.
At no point was any evidence presented of Schwartz's "criminal intent".
The appeals court (see the updated link to the opinion) held that although
"authorization" wasn't
spelled out in the law, Schwartz did things without authorization as
narrowly interpreted. The appellate court also upheld the trial court's
interpretation of "theft": taking anything without permission, even if the
thing is essentially useless or if the taking is implicitly authorized.
The appellate court also seemed to believe that Schwartz might have been
looking for flaws to take credit for them, and that such personal
aggrandizement was inappropriate:
Apparently, defendant believed that, if he
could show that SSD's security had gone downhill since he had left, he
could reestablish the respect he had lost when he left SSD.
But employees look for problems at work all the time and try to fix
them, hoping to receive workplace
recognition. In many other contexts, employees who make the extra effort
to "look for flaws" are considered exemplary.
Schwartz's conviction was expunged in 2007. Intel has never apologized.
What do you do if you are a system administrator, or a
database administrator, and your nontechnical supervisor wants the root
password? And you don't think they are technically competent to have it? The
case of Terry Childs addresses this.
The Schwartz and Kutztown 13 cases have in common the idea that sometimes the
law makes rather mundane things into felonies. For Schwartz, it is very
clear that he had no "criminal" intent in the usual sense, although he did
"intend" to do the actions he was charged with.
The Schwartz, Childs and Amero cases have in common behavior that some
people might find well within the range of the acceptable, while others
might find seriously criminal. These aren't like banking-industry
cases; none of the defendants was trying to push the envelope in terms of
what they could "get away with". All three felt they were "just doing
their jobs".
Julie Amero case
On October 19, 2004, Amero was a substitute teacher (7th grade) at Kelly
Middle School, Connecticut. At some point early in the school day, the
teachers' desk computer started displaying an unstoppable stream of
pornographic web pages. Clicking the close button on one simply brought
up another.
Amero had been explicitly told never to
disturb anything in the classroom, and in particular not to turn
the computer off. So she didn't. She had apparently no idea how to turn off
just the monitor. She spent much of her day at her desk, trying to fix the
problem by closing windows. She did not attempt to tape something over the
monitor, or cover the monitor with something, or turn the monitor face down.
Someone apparently decided that she was actively surfing porn. Within two
days, she was told she couldn't substitute at that school; she was
arrested soon afterward.
Amero had complained to other teachers later that day. Why she didn't demand
that something be done during the lunch hour is not clear. Why she didn't
tape something over the screen is not clear. Amero claimed that two kids
used the computer before the start of class, at a hairstyles site, but
others claimed that could not have happened because
it was not allowed.
It later turned out that the school's content-filter subscription had
lapsed, and so the filter was out of date. Also, the computer had several
viruses or "spyware" programs installed. In retrospect, some sort of
malware-driven porn storm seems the likely explanation.
In January 2007, she was convicted of impairing the morals of a child. This
was despite computer-forensic evidence that a hairstyles site triggered a
scripting attack that led to the Russian porn sites.
The prosecutor's closing arguments hinged on the idea that some of the links
in question had "turned red", thus "proving" that they had been clicked on
(ie deliberately by Amero) rather than having been activated via scripting.
This is false at several levels: link colors for followed links can be any
color at the discretion of the page, and if a page has been opened via a
script, links to it are indistinguishable from links that were clicked on.
In June 2007 Amero was granted a new trial, and in November 2008 she pleaded
guilty to a misdemeanor disorderly conduct charge and forfeited her
teaching license.
Amero's failure to regard the computer problem as an emergency probably
contributed to her situation.
I discussed her case with a School of Education class once, and the
participants were unanimous in declaring that Amero was incredibly dense, at
best, and should not be in the classroom.
Jeremy Hammond
Chicagoan Jeremy Hammond was sentenced in November 2013 to ten years in
federal prison for a break-in at Stratfor, an intelligence-gathering
corporation, that involved the taking of a large cache of emails describing
the international and domestic spying operations carried out by Stratfor.
Hammond has described his actions here as "civil disobedience". Hammond's
record is pretty clearly about political protest.
He pled guilty to a single CFAA count, as part of a plea bargain.
Some had hoped Hammond would be sentenced to the 2-3 years of time already
served. However, Hammond had a previous conviction in 2006 for a hack into a
pro-Iraq-war group known as Protest Warrior, during which he downloaded
their entire database. It so happened that this database included 5000
credit-card numbers; Hammond used none of them. The prosecutor, however,
argued that Hammond "stole credit card numbers", and Hammond was sentenced
to two years in jail.
Andrew "Weev" Auernheimer
Andrew Auernheimer was sentenced in March 2013 to 41 months in prison for
downloading a list of email addresses from AT&T that were associated
with iPad accounts. Some of the email addresses were then published.
Here are some details from Orin Kerr, at http://www.volokh.com/2013/03/21/united-states-v-auernheimer-and-why-i-am-representing-auernheimer-pro-bono-on-appeal-before-the-third-circuit/.
Kerr has agreed to defend Auernheimer pro bono.
The issue was with a particular iPad settings option (Settings ->
Cellular Data -> View Account). When opened, this settings applet made
an http GET request to the AT&T server, attaching the iPad's ICC-ID, a
kind of "serial" number associated with the iPad's SIM card. AT&T
would then return user information corresponding to that ICC-ID, as
obtained at the time the iPad was registered. The settings applet then
displayed this information, along with an empty password field; users were
expected to type the password to log in. The settings applet did not
resemble a browser page, other than by making an http request.
Cookies were not used.
The underlying http GET request could be sent by an ordinary browser, as
well, and the AT&T server would not know the difference. An ordinary
browser would, however, not be configured to automatically look up the
device ICC-ID; that would have to be entered manually as one of the option
fields in the GET request.
Auernheimer and his colleague Daniel Spitler figured out that the
applet's queries were ordinary GET requests, and that if you tried a
random ICC-ID number, and it happened to match someone's real serial
number, AT&T would serve up that someone's real email address. The
ICC-ID is too long for this to work (22 digits), but most of the fields would be
known; only the "individual account identification number" would need to
be guessed, and these were apparently allocated sequentially. (There was
also a check digit.)
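The arithmetic of the guessing can be sketched. ICC-IDs end in a Luhn check digit (the same checksum used on credit-card numbers), so with the prefix fields fixed the attack reduces to stepping the account field and appending the right check digit. The prefix value and the URL in the comment below are made up for illustration:

```python
def luhn_check_digit(payload: str) -> int:
    """Digit that makes payload+digit pass the Luhn test: double every
    second digit from the right, subtract 9 from any result over 9,
    then pick the digit that rounds the sum up to a multiple of 10."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:          # these positions get doubled
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return (10 - total % 10) % 10

def candidate_iccids(prefix: str, start: int, count: int) -> list[str]:
    """Sequential account numbers under a fixed prefix, each with its
    Luhn check digit appended -- the pattern that made guessing feasible."""
    ids = []
    for n in range(start, start + count):
        payload = prefix + str(n).zfill(21 - len(prefix))   # 21 + check = 22 digits
        ids.append(payload + str(luhn_check_digit(payload)))
    return ids

# Each candidate would then go out as an ordinary GET parameter, e.g.
# https://example.invalid/account?ICCID=...   (hypothetical URL)
```

Because the IDs were allocated sequentially, each guess after the first hit was nearly guaranteed to match another real account.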
Further information is at gizmodo.com/the-little-feature-that-led-to-at-ts-ipad-security-brea-5559686.
In Kerr's words:
AT&T decided to configure their
webservers to "pre load" those [iPad-user] e-mail addresses when it
recognized the registered iPads that visited its website. When an iPad
owner would visit the AT&T website, the browser would automatically
visit a specific URL associated with its own ID number; when that URL was
visited, the webserver would open a pop-up window that was
preloaded with the e-mail address associated with that iPad.
The basic idea was to make it easier for users to log in to AT&T's
website: The user's e-mail address would automatically appear in the
pop-up window, so users only needed to enter in their passwords to access
their account. But this practice effectively published the e-mail
addresses on the web. You just needed to visit the right
publicly-available URL to see a particular user's e-mail address.
[Codefendant Daniel] Spitler realized this, and he wrote a script to visit
AT&T's website with the different URLs and thereby collect lots of
different e-mail addresses of iPad owners. And they ended up collecting a
lot of e-mail addresses (around 114,000 different addresses) that they
then disclosed to a reporter. Importantly, however, only e-mail addresses
were obtained. No names or passwords were obtained, and no accounts were
breached.
This appears to be a massive mistake by AT&T. Who should be punished?
When Kerr writes that "the browser would automatically visit a specific
URL associated with its own ID number", this was more accurately the
settings applet, acting as a browser-based application.
AT&T's mechanism was quite different from the common "preloaded login
id"; the latter is usually supplied by the client side, not the
server. The right way to do this would have been for the applet to record
the user-provided email (and password) the first time the user logged in,
and then offered the user the opportunity to reuse it on subsequent logins.
Auernheimer has argued that it was AT&T who "released" these email
addresses. Did they?
Auernheimer's defense team argued that all he did was "walk through an
open door".
The federal government argued that Auernheimer was motivated by profit,
because he was a computer security consultant and therefore stood to benefit
financially from any increase in his reputation.
The feds have also argued that, because Auernheimer is a "jerk",
extraordinary sentencing is warranted. Some examples of Weev's alleged
jerkiness can be seen at http://grahamcluley.com/2013/07/eff-ipad-hacker/;
here is one exchange with a compatriot "Nstyr":
Nstyr: you DID call tech
Weev: totally but not really
Weev: i dont f****n care i hope they sue me
Weev finally got a break; he was released April 11, 2014, after serving
almost 13 months of his 41-month sentence.
But not because the court ruled that the CFAA was misapplied. The Third
Circuit ordered his release because he was tried in New Jersey, a thousand
miles from his home in Arkansas (and not near the allegedly hacked AT&T
servers, either; those were in Texas and Georgia).
If the feds were to seek a new trial in an appropriate jurisdiction, Weev might
be able to raise the no-double-jeopardy rule. Though he has stated he would
not, in order to force a trial on the merits of the CFAA itself.
But a week later the feds formally dropped the case; Weev will not face a
new trial.
Did Weev "hack" AT&T, or did AT&T make a mistake?
How is Weev's "exploit" different from a buffer-overflow exploit? How is
it similar?
Even RTM's sendmail "wiz" bug was supposed to require a password.
It's just that a configuration-loading problem meant that an empty password
would often work.
Did Weev attempt to bypass any access-control measures? Does it matter?
Weev has stated that he wants to start a new company looking for software
problems on Wall Street. When the company finds a software flaw, they will
announce it publicly, but first will short-sell that company's stock (that
is, they will borrow shares and sell them). When the company's stock falls
on the news, they will clean up.
The company is to be named TRO LLC.
Well before the AT&T hack, Weev doxxed Kathy Sierra. Sierra's article
for Wired about this is here.
Weev's justification is here.
Matthew Keys
Matthew Keys was a reporter at the Tribune-owned KTXL in Sacramento. He was
fired, but his system passwords were not disabled. He turned them over to
Anonymous with the instructions to "go f--k some s--t up". Anonymous changed
a few stories to clearly humorous versions. Keys himself had
nothing to do with any of it.
A slightly complicating issue is that Anonymous apparently obtained a
higher-privilege password. Keys has said he did not supply this.
Keys was convicted in 2015, and sentenced in April 2016 to two years in
prison.
(One of the humorous changes: "Elect Chippy 1337 in 201x!")
Summary of Crime
Sometimes there are profound misunderstandings as to what constitutes a
"crime". Is there any objective standard when it comes to hacking? Is
acquiring information that you were nominally "not supposed to have" a
crime?
Once upon a time, the doctrine of mens rea was crucial: to be
convicted of a crime, the prosecutor had to prove criminal intent.
Now, some feel many criminal prosecutions are over technicalities. Randal
Schwartz might have the best case here. But Aaron Swartz's "criminal
intent" is pretty mysterious too; JSTOR simply did not include file-download
limits for internal MIT connections.
As a non-cyber example of (the lack of) mens rea, consider the
prosecution of Terry Dehko and daughter Sandy Thomas, who ran a grocery
store in Michigan. The feds charged them with money-laundering by
"structuring" their cash deposits to be just under the $10,000 reporting
threshold.
Never mind that their insurance only covered cash losses less than $10,000.
There was zero evidence of any intention to deceive anyone.
The feds eventually (2013) backed down, and agreed to dismiss claims.
Computers and Ordinary Criminals
What if you committed an ordinary crime, rather than a computer crime? There
can still be computer-related problems.
First, many parole decisions are now made by computers, using opaque
machine-learning algorithms. Without access to the training data, the
fundamental fairness of the program simply cannot be assessed. And you don't
get access to the training data, because it is "proprietary".
Second, there are many software packages used at criminal trials that also
use opaque algorithms.
Jurisdictional issues apply to both criminal and civil law. Oddly,
criminal law is more ambiguous; we
start with civil law. For online shopping, one of the first questions is
where did the sale take place? Here are some legal theories that have been
applied (eg in the LICRA/Yahoo case):
- the "affects" test: the court decides that the remote action affects
its own local citizens. A passive website would count here.
- the "affects intentionally" test: the court decides that the source
intended to have an effect on its local citizens.
- the "targeting" test: the court feels that the action was actually
directed at its local citizens, with some level of intent.
- the "primarily affects" test: the court decides that the action's
primary effect is on its local citizens.
- the plaintiff test: the affected party (buyer or the one defamed, for
example) lives in the local jurisdiction
- purposeful availment: by choosing to engage in local commerce, the
remote entity "purposefully avails" itself of the legal system of the
local jurisdiction
- contract: the remote site has a contract with parties in the local
jurisdiction
The following are the traditional three rules for a US court deciding it
has "personal jurisdiction" in a lawsuit:
- Purposeful availment: did the
defendant receive any benefit from the laws of the jurisdiction? If
you're in South Dakota and you sell to someone in California, the laws
of California would protect you if the buyer tried to cheat you.
Generally, this is held to be the case even if you require payment
upfront in all cases. The doctrine of purposeful availment means that,
in exchange here for the benefits to you of California's laws, you
submit to California's jurisdiction.
- Where the act was done.
- Whether the defendant has a reasonable expectation of being subject to
the local jurisdiction.
Jurisdiction and criminal cases
The 6th amendment to the constitution requires that
In all criminal prosecutions, the accused
shall enjoy the right to a speedy and public trial, by an impartial jury
of the state and district wherein the crime shall have been committed
But what state and district are involved if you do something allegedly
criminal online?
Venue is extremely important if
"community standards" are at stake. Even if they are not, an inconvenient
venue can be chosen by prosecutors to harass you or make your defense more
expensive; alternatively, a venue can be selected where longer sentences are
handed down or juries are less tolerant of social differences.
If you are selling something
illegal, the feds may prosecute you in any state in which the material could
be purchased. The Reagan administration did just that when attempting to
crack down on pornography in the 1980s, often filing parallel prosecutions
all over the country.
However, if you are just a buyer,
the legal principle is still muddled. Just where were you in cyberspace when
you were sitting in your living room buying tax-planning software? Delaware?
See Baase, §5.5.2.
For hacking, it in theory may matter where you were when you launched the
attack, but as most such acts are prosecuted under Federal law (eg the CFAA)
this does not matter quite so much as one might think.
Remember the case of Yahoo selling Nazi memorabilia in California, and being
held liable for that by a French court?
Should Onel de Guzman, the Philippine national who allegedly wrote the
ILOVEYOU virus, be able to vacation in the US? Or should the US arrest him
if they ever have the chance?
Should the US have arrested Dmitry Sklyarov of the Russian firm Elcomsoft
because Elcomsoft sold an ebook-DRM-removal program in Russia?
(Note the US eventually agreed the answer was "no", and dropped the case.)
In 2006 the US ratified the so-called "cybercrime treaty", to encourage
international cooperation in prosecuting computer crime. However, in an
important area the treaty completely lacked
the usual "dual-criminality" provision, that the action in question must be
a crime in both nations for the
treaty to apply. The consequence is that US ISPs may be required to assist
in foreign-government investigations of events that are not illegal under US
law, even when the events occurred within the US. Foreign governments may
ask for electronic seizures and searches (eg of email records), and ISPs
must cooperate promptly or face charges.
The treaty also not only permits but requires
the FBI to engage in warrantless wiretapping of Americans if a foreign
government claims that the wiretap is necessary for a cybercrime
investigation. It is unclear if this has ever actually been done, however.
In Baase §5.5.3, she speculates that the US may have agreed to this
no-dual-criminality wording in order to be able to extend the reach of its
own laws overseas.
There are often other loopholes under which foreign governments may turn
down extradition requests.
There is some speculation that China refused to extradite NSA leaker Edward
Snowden from Hong Kong not because of the "political arrest" exemption but
because Snowden had claimed that the NSA hacked Chinese sites.
In 2001 and 2002, Scottish programmer Gary McKinnon allegedly hacked a large
number of US military sites. He was indicted by a US grand jury in November
2002. The US has been trying to extradite him from the UK ever since, so
far without success.
In 2005 the UK established a new extradition treaty with the US, under which
the US was no longer required to supply "incontrovertible evidence".
There was a 2008 hearing in the UK House of Lords; one issue was the fact
that the US was trying to bargain with McKinnon, but reserved the right to
retract any of its promises. (This is standard in US plea-bargaining, but
not in the UK.) Apparently the House of Lords ended up dismissing this
concern, however. Of more significance may have been McKinnon's diagnosis of
Asperger's Syndrome, and also of possible suicidal ideation.
In October 2012, Home Secretary Theresa May denied extradition on
human-rights grounds relating to McKinnon's illnesses.
After careful consideration of all of the
relevant material, I have concluded that Mr McKinnon's extradition would
give rise to such a high risk of him ending his life that a decision to
extradite would be incompatible with Mr McKinnon's human rights.
McKinnon has repeatedly claimed that he was hacking the US sites to find
information about UFOs, antigravity, and free energy.
McKinnon also claimed he got in by finding accounts with blank passwords;
others have suggested that the extradition attempt was to punish him not for
damaging US systems but for embarrassing the US military for their weak
security. Some have argued the same issue applies with Edward Snowden.
British student Richard O'Dwyer was the developer of TVShack.net,
a search engine for copyrighted content. The US began extradition
proceedings against him in May 2011, charging him with criminal copyright
infringement. O'Dwyer's legal team argued that the US did not have
jurisdiction, and that he should be tried in the UK.
In November 2012 the US dropped the extradition proceedings, possibly as
part of a plea agreement in which O'Dwyer would travel to the US, plead
guilty to something, and pay a fine.
British citizen and CEO of BETonSPORTS.com (no longer online) David
Carruthers was arrested in Dallas in July 2006 when changing
planes, because in the US online betting is illegal. He was sentenced on
January 8, 2010 to 33 months in prison; apparently this does not
include the 3 years already served under house arrest.
He conducted all his BETonSPORTS business while in England, and was just
passing through the US when arrested. He was charged because some of
BETonSPORTS's customers were allegedly US citizens.
At BETonSPORTS.com you could bet on Manchester United Football Club and the
England Cricket Team, but also on the Detroit Lions and the New York Mets.
Facing a potential 20-year sentence, he finally entered a plea of guilty.
Carruthers is a major advocate of regulated online gambling.
What else could have been done? The real issue with internet gambling is
that it so frequently involves gambling on credit.
(This would not be the case if customers sent in money in advance, but that
greatly complicates use of the sites by impulse gamblers.)
In March 2007, BETonSPORTS founder Gary Kaplan was arrested in the Dominican
Republic, and extradited to the US. Kaplan pled guilty in 2009 to various
charges.
In September 2006 Peter Dicks was arrested at Kennedy
International airport for his role with Sportingbet PLC, also based in the
UK. The warrant was issued by Louisiana, for violations of Louisiana state
law. As New York had no state laws against internet gambling, they ended up
dismissing the warrant three weeks later and Dicks departed.
And yet, in other contexts, the government seems completely uninterested in
online gambling. See http://www.bloomberg.com/features/2016-virtual-guns-counterstrike-gambling/.