In recent years, several forms of previously accepted forensic science have come under scrutiny as unreliable or simply unsound.1 For instance, the science of arson investigation is among ‘the most prominent current example[s] of shifted science potentially calling into question hundreds or thousands of convictions that occurred over the past few decades’.2 The National Academy of Sciences (‘NAS’), in a 2009 report on forensic science, observed that there is a paucity of research supporting forensic arson investigation.3 The report also concluded that ‘many of the rules of thumb that are typically assumed to indicate that an accelerant was used (e.g., ‘alligatoring’ of wood, specific char patterns) have been shown not to be true’.4 Bite mark analysis has raised similar concerns. The NAS report observed that ‘[a]lthough the majority of forensic odontologists are satisfied that bite marks can demonstrate sufficient detail for positive identification, no scientific studies support this assessment, and no large population studies have been conducted. In numerous instances, experts diverge widely in their evaluations of the same bite mark evidence’.5 More recently, the Texas Forensic Science Commission recommended a moratorium on bite mark evidence in criminal trials.6

The FBI itself has questioned or discontinued use of forensic methods including bullet lead examination7 and hair analysis8 in light of evidence that such methods are not probative or are simply unsound. With respect to hair analysis, ‘[t]he Justice Department and FBI…formally acknowledged that nearly every examiner in [the FBI Laboratory's microscopic hair comparison unit] gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000’.9

Not even fingerprint analysis has escaped unscathed. Here, too, the NAS report noted a lack of scientific evidence supporting the accuracy of modern fingerprint analysis.10 The report also expressed concern about the dearth of information about (and disclosure of) error rates in such analysis, concluding that ‘[a]lthough there is limited information about the accuracy and reliability of friction ridge analyses, claims that these analyses have zero error rates are not scientifically plausible’.11

By comparison, DNA analysis has typically been viewed ‘like a crystal ball being handed to the criminal justice system’.12 The same NAS report that castigated so many traditional forensic sciences lauded DNA analysis as a forensic discipline with high ‘probative power’.13 Indeed, the report took DNA analysis as a point of comparison for other forensic sciences. Unlike toolmark and firearms analysis, for instance, ‘the protocols for DNA analysis do represent a precisely specified, and scientifically justified, series of steps that lead to results with well-characterized confidence limits, and that is the goal for all the methods of forensic science’.14 Forensic DNA analysis deserves much of the acclaim it has received. It is, as the NAS report observed, ‘scientifically sound’, and its development was ‘a fortuitous by-product of cutting-edge science’.15

But not all is well in the realm of forensic DNA analysis. In her new book, ‘Inside the Cell’, Erin Murphy complicates the laudatory tale of forensic DNA analysis, warning that ‘the same forces that corrupted other, older forensic methods still remain very much alive in the criminal justice system today’.16 Murphy examines how DNA is collected, analyzed, disclosed, and used in criminal investigations and in courts, exposing both the promise and documented pitfalls of DNA analysis in each arena, as well as how these matters are intertwined and interdependent. She exhorts readers that ‘DNA testing is neither savior nor cure-all: it is just another form of proof deserving of careful attention’.17 Ultimately, Murphy concludes that ‘we must attend to issues surrounding efficiency, accountability, accuracy, privacy, and equality’ in order to ‘fairly and accurately harness the power of forensic DNA testing’.18

Despite the power and breadth of Murphy's final analysis, there is one issue that unfortunately goes unnamed: transparency. Concerns about transparency surface at various points in Murphy's analysis, from cover-ups of lab errors,19 to non-disclosure of state or laboratory policies regarding when, how, and for how long DNA samples may be used,20 to refusals by developers of DNA testing software to share their source code with defendants.21 Yet when Murphy's final chapter ties together the seemingly disparate strands of her analysis, transparency is not among the values she names as most important. To be sure, Murphy identifies transparency as important to achieving some of her named values, including accountability22 and accuracy.23 But transparency is more than merely instrumentally important; it is itself of sufficient significance that its absence from Murphy's ‘road to reform’ is striking. This is, however, a small quibble with an otherwise invaluable resource for those interested in the rapidly expanding world of forensic DNA analysis, and in the relationship between science and law enforcement more broadly.

INTERROGATING FORENSIC DNA

‘Inside the Cell’ divides its focus into four parts, the last of which sets out Murphy's proposal for ‘building a better DNA policy’.24 After a brief overview of DNA science and forensic typing, Murphy first examines how the ‘popular understanding of forensic DNA testing’—as a crisp, objective, near-infallible science—fails to reflect the much murkier facts of real life.25 Experts in the field have long known that ‘interpretation of DNA from crime scenes can be incredibly complex’.26 But the various ways in which that complexity arises and may be compounded are growing. In one recent study, researchers asked 17 experienced DNA examiners to analyze a DNA mixture (a sample including DNA from more than one person) from an alleged gang rape; the examiners reached widely divergent conclusions about whether a particular suspect might have been involved in the rape.27 What is more, only one of the 17 examiners tendered a conclusion consistent with that of the analyst in the actual case in which the DNA mixture arose—a conclusion introduced at trial to convict a real-life suspect of the crime.28

These divergent conclusions are possible, even for a well-grounded scientific endeavor like forensic DNA typing, because the DNA collected at real crime scenes is often unlike the pristine DNA samples that we provide to our doctors or that known arrestees and convicted persons provide through a cheek swab. Unlike samples taken in a controlled environment, ‘crime scene testing…is like seeking results from [a] dirty Band-Aid—after it has been in the trash for two weeks’.29 Such samples ‘may have been exposed to light, heat, moisture, or chemicals that can compromise the ability to get results’.30 A sample may also contain DNA from an unknown number of persons,31 not all of whom may have had direct contact with the tested object, or contact in proportion to the number of their cells found on it.32

Each of these complicating factors may be further compounded by recent extensions of forensic DNA analysis to so-called ‘touch’ DNA or ‘low copy number’ settings—those in which DNA analysis is performed on a very small number of cells or even a single cell.33 As Murphy explains, in these settings ‘the rules of thumb that help analysts interpret ordinary samples do not work as well…, and guided subjectivity risks becoming little more than self-justifying guesses’.34 Yet, of the few courts to adjudicate the admissibility of low copy number test results, most have deferred to prosecutors' assertions that such testing is just like ordinary forensic DNA analysis.35 Meanwhile, laboratories engaged in this envelope-pushing testing have frequently ‘refused to disclose the protocols and studies’ informing their analysis, claiming that this information is proprietary.36

These difficulties may be present even if the crime scene investigators, lab analysts, and everyone in between perform their jobs flawlessly. But this is an unrealistic assumption. As Murphy documents, there are now dozens of known (and how many more still unknown?) scandals involving sloppy, mistaken, or fraudulent casework.37 All the while, forensic analysts decline or refuse to disclose error rates when reporting DNA match probabilities in court, perpetuating the notion that forensic DNA analysis is free from either scientific or human error.38

Having enumerated how the processes for generating a DNA profile can mislead or give rise to error, Murphy casts her perspective more broadly, examining how the search for and report of forensic matches can similarly mislead jurors, judges, prosecutors, and even lab analysts. Different ways of calculating DNA match probabilities may give rise to widely divergent statistics and very different levels of certainty that a particular suspect or defendant ‘did it’.39 There are at least 10 different software packages on the market for decoding whether a particular suspect or defendant's DNA is present in a complex sample, like a mixture or low-quantity crime scene stain—each utilizing its own algorithm.40 Meanwhile, the FBI prohibited database research that threatened to complicate the one-in-several-trillion type of statistics on which prosecutors routinely rely in court.41 While differing statistical models and disagreements about whether a DNA match has been found would be inconvenient in a case in which DNA were just one piece of evidence against the defendant, these matters take on increased significance in the growing number of cases prosecuted based on a DNA match alone.42
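
To see where the familiar astronomical figures come from, consider a minimal sketch of the simplest of these statistics: the random match probability computed with the ‘product rule’. The loci, allele frequencies, and Hardy-Weinberg assumptions below are invented for illustration; real calculations draw on published population frequency tables, and the divergent methods Murphy describes (likelihood ratios, combined probabilities of inclusion, database match statistics) can yield very different numbers from the same sample.

```python
# Hypothetical illustration of the 'product rule' behind random match
# probabilities. Loci and allele frequencies are invented for illustration;
# they are not drawn from any real population database.

def genotype_frequency(p, q):
    """Hardy-Weinberg frequency of a genotype with allele frequencies p and q."""
    return p * p if p == q else 2 * p * q

def random_match_probability(profile):
    """Multiply per-locus genotype frequencies across assumed-independent loci."""
    rmp = 1.0
    for p, q in profile.values():
        rmp *= genotype_frequency(p, q)
    return rmp

# A made-up 13-locus profile, each locus heterozygous with modest allele frequencies.
profile = {f"locus_{i}": (0.08, 0.12) for i in range(13)}
rmp = random_match_probability(profile)
print(f"Random match probability: 1 in {1 / rmp:,.0f}")
```

Even with unremarkable per-locus frequencies, multiplying across a dozen loci produces a vanishingly small probability; the contested questions Murphy highlights concern which probability to compute and under what assumptions, not the arithmetic itself.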

Murphy also trains her attention on the new and expanding ways investigators are using DNA samples, profiles, and databases to identify suspects and bring prosecutions. Analysts have played ‘Go Fish’ with the national database, uploading variations of an unclear profile in hopes that one of them will yield a match.43 Labs have also erroneously uploaded DNA profiles from individuals who do not meet the requirements for database inclusion, including victims.44 And labs and new private vendors are now experimenting with ‘forensic DNA phenotyping’—creating a ‘genetic mugshot’ of the perpetrator of a crime based on the DNA he left behind at the crime scene.45 To date, forensic DNA typing has focused on portions of DNA not known to have any useful function (other than their use for forensic typing).46 DNA phenotyping, by contrast, relies on entirely distinct portions of the genome.47

The range of individuals subject to perpetual genetic surveillance has expanded in other ways as well, often with judicial blessing. States have consistently expanded the reach of offender DNA databases beyond those convicted of serious felonies.48 Today, nearly all states require those convicted of any felony to provide a DNA sample, most require a sample from those convicted of certain or any misdemeanors, and a growing number compel DNA samples from those merely arrested for a felony or misdemeanor.49 In Maryland v. King, the Supreme Court expressly sanctioned arrestee sampling.50

Courts have also largely held that the DNA we involuntarily and unknowingly leave behind in our ordinary activities—licking a stamp, sipping from a glass, visiting a doctor for a routine pap smear, discarding a pizza slice—has no protection from police collection and analysis for any purpose.51 Such surreptitious sampling gives rise to wholly unregulated sources of DNA for analysis.52 When the profiles from these kinds of samples do not meet the FBI's requirements for inclusion in the federally regulated databases, labs often maintain them in off-books ‘rogue’ databases free from oversight and unbound by the use restrictions that govern the federally regulated databases.53

Laboratories have introduced and expanded the use of existing samples through familial searching as well. Familial searching, or the use of partial matches more generally, proceeds from a close-but-imperfect match between a crime scene sample and a known offender profile to launch an investigation into that known offender's close genetic relatives.54 Familial searching describes an intentional database search for these kinds of close matches; partial matching more broadly embraces instances in which such matches are fortuitously discovered during routine database searches.55 Whether fortuitous or deliberate, partial matching takes advantage of the biological fact that close genetic relatives are more genetically similar than unrelated individuals to effectively sweep the relatives of known offenders into the database—even when those relatives have never been arrested or convicted of an offense enabling their direct inclusion in the database.56 Because the existing offender databases reflect the racial disparities of the criminal justice system writ large, moreover, the burdens of expanded DNA collection, testing, and use fall more heavily on racial minorities.57
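
In rough computational terms, a partial-match search looks for profiles that share many, but not all, alleles with a crime scene sample. The sketch below is a simplified, hypothetical illustration of that idea only: the profile format, loci, and threshold are invented, and real familial-search protocols typically rely on kinship likelihood ratios or CODIS stringency rules rather than a bare allele count.

```python
# Hypothetical sketch: flagging candidate partial matches by counting shared
# alleles across STR loci. Data format and threshold are illustrative only.

def shared_allele_count(profile_a, profile_b):
    """Count alleles shared between two profiles, locus by locus."""
    total = 0
    for locus, alleles_a in profile_a.items():
        remaining = list(profile_b.get(locus, ()))
        for allele in alleles_a:
            if allele in remaining:
                remaining.remove(allele)  # treat each locus as a two-allele multiset
                total += 1
    return total

def flag_partial_matches(crime_scene_profile, offender_database, threshold=20):
    """Return offenders sharing at least `threshold` alleles with the crime
    scene profile without being a full match at every locus."""
    candidates = []
    for offender_id, profile in offender_database.items():
        shared = shared_allele_count(crime_scene_profile, profile)
        full_match = all(
            sorted(profile.get(locus, ())) == sorted(alleles)
            for locus, alleles in crime_scene_profile.items()
        )
        if shared >= threshold and not full_match:
            candidates.append((offender_id, shared))
    return sorted(candidates, key=lambda pair: -pair[1])
```

Because relatives share far more alleles than strangers do, any such screen inevitably surfaces the siblings, parents, and children of people already in the database—which is precisely the expansion of genetic surveillance described above.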

Murphy finally brings these myriad strands of DNA difficulties together, identifying shared themes and interdependencies between lab, lawyer, and law. Murphy reminds readers, ‘it is a mistake to think that [forensic DNA testing's] underlying scientific legitimacy will, on its own, ensure the integrity of DNA evidence’.58 Murphy calls for ‘broad systemic reform’, attending to ‘issues surrounding efficiency, accountability, accuracy, privacy, and equality’, in order to ‘fairly and accurately harness the power of forensic DNA testing’.59

THE VALUE OF TRANSPARENCY

While Murphy does tremendous work in bringing together multiple strands of analysis of the scientific and legal challenges of forensic DNA testing, one issue crucial to the fair and accurate use of forensic DNA testing remains underdeveloped: transparency. The need for greater transparency is one that arises at numerous points in Murphy's own analysis. Indeed, the disjunction between the perception of forensic DNA analysis as a pristine science and the reality of the difficulties associated with identifying DNA profiles from samples more ‘like dirty Band-Aids’ is itself a reflection of a lack of transparency about the realities of forensic DNA typing.60 That the former is a perception shared by the public,61 courts,62 and even the National Academy of Sciences63 is a testament to how opaque information about the true difficulties of DNA analysis remains. The refusal of laboratories to provide, and of courts to demand, evidence regarding error rates in DNA analysis both compounds this disjunction and is, similarly, evidence of a lack of transparency about how such analysis operates in practice.64

Inadequate oversight of quality control within forensics labs exacerbates these transparency gaps. As Murphy documents, ‘nearly every major DNA unit—from the most applauded and sophisticated to the most amateur and haphazard, has endured a scandal of some kind’.65 Yet, the lab personnel at fault often escape censure for years at a time and laboratories are rarely disciplined.66 Many state lab officials sweep issues regarding quality control ‘to the side’.67 Oversight mechanisms are rarely rigorous, encourage underreporting of errors even when inspections occur, and hardly ever report misfeasance in a publicly accessible way.68 One of the major accrediting bodies for crime labs fails to include even basic data about compliance on its website.69

Nor do transparency gaps mask only human error in forensic DNA analysis. Several developers marketing and deploying software packages for testing complex mixtures, low-quality samples, or low-quantity samples have ‘refuse[d] to release information about their source code or the precise manner in which their statistics are computed’.70 This is so despite the fact that these ‘different models produce different results, because they give weight to different factors’.71 In one case, the developer of one such software package, Mark Perlin, ‘admitted that no other scientists had seen his code or reviewed it directly’, all the while refusing to make the source code available and ‘defending it as a “trade secret”’.72 Yet, the judge deemed the software reliable and permitted its results to be admitted into evidence.73 To explain the software to the jury, prosecutors called Perlin as their ‘star witness’, and Perlin delivered self-serving testimony about the unexamined rigor of his software.74 The jury returned a conviction.75

Finally, transparency concerns arise regarding the composition and use of the forensic DNA databases. For one thing, a growing number of labs have implicitly expanded the reach of state forensic databases through the use of partial DNA matches to initiate investigation of the family members of a known offender. While a handful of states have publicly available policies in place governing when and how they pursue these kinds of matches, a greater number have policies permitting the use of such partial matches that are buried in laboratory manuals and were developed by lab personnel largely without oversight.76

More generally, information about how analysts use the local, state, and national DNA databases is difficult to obtain in light of FBI rules limiting disclosure of information relating to such use.77 In many instances, those outside the lab and prosecutor's office will never even learn whether a database search turned up more than one possible match. And while lab analysts working hand-in-glove with prosecutors sometimes ‘Go Fish’ in DNA databases, defense attorneys are routinely denied access to the federally administered databases to search for other possible perpetrators of a crime.78

The existence of ‘rogue databases’, informal DNA databases maintained ‘outside of the FBI's centralized architecture’, is rarely even publicly disclosed.79 In these databases, analysts have stored DNA profiles from crime victims, individuals providing DNA samples in response to police request or to eliminate themselves as suspects, and individuals of police interest whose DNA has been surreptitiously collected from a stray object they have handled.80 Troublingly, rogue databases operate ‘beyond the reach of established laws that govern either the origin or quality of the profiles’ they contain.81 As Murphy recognizes, ‘the law effectively encourages police to act in the sneakiest ways possible, by according the least oversight and protection to DNA samples collected on the sly’.82

Failures of transparency thus infect every issue Murphy tackles in her book; yet Murphy declines to argue that transparency is necessary to ‘fairly and accurately harness the power of forensic DNA testing’.83 To be sure, Murphy links transparency to achieving some of her named values. Most prominently, under ‘accountability’, Murphy acknowledges that ‘[t]he move toward greater accountability can only succeed … if it occurs in tandem with a move toward greater transparency’.84 Here, Murphy argues that, in addition to better oversight mechanisms for laboratory quality control, information about the results of such oversight must be more publicly available.85 Murphy also links transparency to ‘accuracy’ in calling for broader ‘discovery entitlements’ for defense counsel86 and for ‘[a]lgorithmic transparency’—the disclosure of the source code used to calculate DNA match statistics.87 And under ‘efficiency’, Murphy calls for ‘better information about how our DNA databases are working’, identifying a range of questions about the composition and use of federally administered databases that would be helpful to answer.88 Yet, these discussions of transparency suggest that transparency is desirable only insofar as it serves and supports Murphy's named goals.

Casting transparency in a supporting role, however, misunderstands the important part that transparency itself can play in bringing about better policy and decision making. Scholars, jurists, and policymakers have long recognized that transparency is indispensable to good policy. Louis Brandeis, co-author of the seminal article ‘The Right to Privacy’, acknowledged that publicity—transparency—‘is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants’.89 Legislators have repeatedly mandated sunshine—openness and transparency—as a tool to address social ills. Since 1976, Congress has required most agency meetings to be open to the public through the Government in the Sunshine Act.90 More recently, as part of the Affordable Care Act, Congress enacted the Physician Payments Sunshine Act, which requires certain drug and device manufacturers to disclose their financial relationships with physicians and teaching hospitals.91

Moreover, research supports the importance of transparency as such. In one study, researchers alternated between an image of flowers and an image of a pair of eyes adjacent to an ‘honesty box’ at a workplace beverage station.92 In such a setting, individuals have the option of depositing funds into the honesty box according to a price list, but there is no formal enforcement mechanism for payments. The researchers found that ‘[o]n average, people paid 2.76 times as much in the weeks with eyes … than with flowers. There was no evidence that image type affected consumption’.93 A subsequent study demonstrated that, when an image of eyes (as opposed to flowers) was present, individuals were less likely to litter in a cafeteria.94 In other words, subtle cues of transparency—that others might see what is otherwise unseen—are sufficient to induce more honest and socially cooperative behavior. Transparency may thus yield self-policing and better policies and practices.

A positive relationship between transparency and more thoughtful policy is also reinforced in at least one matter of forensic DNA policy: partial matching for familial investigation. In general, partial matching policies embody a startling lack of transparency.95 Among states that permit reporting of at least some partial matches, however, indicators of greater transparency correlate with policies that include more scientific and procedural steps prior to disclosure.96 The four states that have reported a partial match at least once in the absence of a written policy imposed few hurdles to disclosure of those matches.97 In states that have committed a policy to writing, meanwhile, those policies ‘appear to be not only easier to access in most cases, but also more specific in their instructions than their unwritten counterparts’.98 This is particularly so for states embracing both fortuitous and deliberate partial matching. The policies in these states are more often publicly accessible and more detailed—‘not only on the quality of the partial match, but also on the types of cases in which partial match information may be released’.99 Although correlation is not causation, this limited convergence between greater public availability and greater procedural safeguards suggests that transparency may yield greater deliberation in policymaking.100

This is not to say that transparency and disclosure should be without limits. Sometimes, transparency-obscuring mechanisms function both as features and as bugs. As Murphy recognizes, the decentralization of information about whose DNA is in the forensic DNA database is one such mechanism.101 Decentralization makes it more difficult to compile and report accurate statistics about the database population and how its profiles are being used.102 But decentralization has the virtue of also making it more difficult ‘to hack or tamper with the system’.103 Accordingly, while greater transparency should be a primary goal in the development of better DNA policy, that goal must be tempered by good sense and the other values such development seeks to enhance.

In sum, declining to name transparency as a key value in the final, policy-prescriptive chapter of ‘Inside the Cell’ is unfortunate. But this is a minor flaw. After all, ‘Inside the Cell’ attends to matters of transparency throughout its pages. It does significant work to advance the conversation about appropriate forensic DNA policy. It is well worth reading.

1
See Erin E. Murphy, Inside the Cell 10 (2015); see also Nat'l Res. Council, Comm. on Identifying the Needs of the Forensic Sci. Cmty., Strengthening Forensic Science in the United States: A Path Forward 8 (2009) [hereinafter NAS Report] (‘[T]here is a notable dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods.’); Sabra Thomas, Addressing Wrongful Convictions: An Examination of Texas's New Junk Science Writ and Other Measures for Protecting the Innocent, 52 Hous. L. Rev. 1037, 1042 (2015) (identifying among ‘examples of junk science’, ‘dog scent lineups, bite mark comparisons, arson science, and hair and fiber analysis’).
2
See Caitlin Plummer & Imran Syed, ‘Shifted Science’ and Post-Conviction Relief, 8 Stan. J. Civ. Rts. & Civ. L. 259, 271 (2012).
3
NAS Report, supra note 1, at 173.
4
Id.
5
Id. at 176 (noting ‘inherent weaknesses involved in bite mark comparison’).
6
Erik Eckholm, Texas Panel Calls for an End to Criminal IDs via Bite Mark, New York Times, Feb. 13, 2016, at A10.
7
FBI Laboratory Announces Discontinuation of Bullet Lead Examinations, Fed. Bureau of Investigation, Sept. 1, 2005, https://www.fbi.gov/news/pressrel/press-releases/fbi-laboratory-announces-discontinuation-of-bullet-lead-examinations (accessed Mar. 24, 2016) (announcing that the FBI ‘will no longer conduct the examination of bullet lead’ in light of its conclusion that ‘neither scientists nor bullet manufacturers are able to definitively attest to the significance of an association made between bullets in the course of a bullet lead examination’).
8
See Spencer S. Hsu, FBI Admits Flaws in Hair Analysis Over Decades, Wash. Post, Apr. 18, 2015, https://www.washingtonpost.com/local/crime/fbi-overstated-forensic-hair-matches-in-nearly-all-criminal-trials-for-decades/2015/04/18/39c8d8c6-e515-11e4-b510-962fcfabc310_story.html? (accessed Mar. 24, 2016); see also NAS Report, supra note 1, at 160, 161 (‘No scientifically accepted statistics exist about the frequency with which particular characteristics of hair are distributed in the population. There appear to be no uniform standards on the number of features on which hairs must agree before an examiner may declare a “match.” … The categorization of hair features depends heavily on examiner proficiency and practical experience.’).
9
Hsu, supra note 8.
10
NAS Report, supra note 1, at 142, 145.
11
Id. at 142.
12
Mark A. Godsey & Marie Alou, She Blinded Me with Science: Wrongful Convictions and the ‘Reverse CSI-Effect’, 17 Tex. Wesleyan L. Rev. 481, 485 (2011).
13
NAS Report, supra note 1, at 133.
14
Id. at 155.
15
Id. at 133.
16
Murphy, supra note 1, at 311.
17
Id.
18
Id. at 266.
19
See infra notes 64–68 and accompanying text.
20
See infra notes 75, 78–81 and accompanying text.
21
See infra notes 69–74 and accompanying text.
22
Murphy, supra note 1, at 285, 286.
23
Id. at 299, 300.
24
Id. at 263.
25
Id. at ix.
26
Id. at 4.
27
Id. at 4, 5 [describing I.E. Dror & G. Hampikian, Subjectivity and Bias in Forensic DNA Mixture Interpretation, 51 Sci. & Just. 204 (2011)].
28
Murphy, supra note 1, at 5.
29
Id. at 19.
30
Id.
31
Id. at 23, 25.
32
Id. at 29, 30 (describing generally the issue of DNA transfer); see generally id. at 29, 47 (describing research relating to and difficulties arising from DNA transfer).
33
Id. at 75, 76.
34
Id. at 77.
35
Id. at 78.
36
Id. at 79.
37
Id. at 53, 73.
38
Id. at 122, 123.
39
Id. at 86 (random match probability), 92–4 (combined probability of inclusion, likelihood ratio), 106–8 (database match statistic), 113 (Balding-Donnelly approach).
40
Id. at 97, 98 (identifying software packages and describing how each utilizes a somewhat different algorithm, thus yielding different results depending on which software package is used).
41
Id. at 111, 112.
42
Id. at 110.
43
Id. at 144, 145.
44
Id. at 140.
45
Id. at 220.
46
Id. at 218.
47
Id. at 218, 220.
48
Id. at 156, 157.
49
Id.
50
133 S. Ct. 1958 (2013).
51
Murphy, supra note 1, at 170, 172.
52
Id. at 169, 171–2.
53
Id. at 170.
54
Id. at 191, 193; see also Natalie Ram, Fortuity and Forensic Familial Identification, 63 Stan. L. Rev. 751, 763–4 (2011) [hereinafter Ram, Fortuity]; Natalie Ram, DNA by the Entirety, 115 Colum. L. Rev. 873, 882 (2015) [hereinafter Ram, Entirety].
55
Ram, Fortuity, supra note 54, at 764, 766 n. 87; Ram, Entirety, supra note 54, at 919, 920.
56
Murphy, supra note 1, at 204; Ram, Fortuity, supra note 54, at 794; Ram, Entirety, supra note 54, at 927.
57
Murphy, supra note 1, at 256, 260.
58
Id. at 265.
59
Id. at 266.
60
Id. at 19.
61
See id. at 19; Godsey & Alou, supra note 12, at 485.
62
See Murphy, supra note 1, at 122 (‘[Courts] have accepted government assertions that DNA testing methods, and the manner of execution have a “zero error rate”’.) (footnotes omitted).
63
See NAS Report, supra note 1, at 133.
64
See Murphy, supra note 1, at 122, 123.
65
Id. at 53.
66
See id. at 50, 53 (relating events surrounding one fraudulent and incompetent lab analyst, whose employer covered up her shoddy work for years, and noting that this is but one such story of its kind), 62–3 (describing inadequate accreditation and oversight mechanisms that allow a laboratory to ‘simply hide its dysfunction for years’).
67
Id. at 55.
68
Id. at 58, 73.
69
Id. at 64.
70
Id. at 98.
71
Id.
72
Id. at 101.
73
Id.
74
Id.
75
Id.
76
See Ram, Fortuity, supra note 54, at 776, 778 and Fig. 3; see also Murphy, supra note 1, at 196, 198.
77
Murphy, supra note 1, at 146, 147.
78
Id. at 148.
79
Id. at 168.
80
Id. at 169, 170.
81
Id. at 168.
82
Id. at 170.
83
Id. at 266.
84
Id. at 285.
85
Id. at 285, 286.
86
Id. at 295.
87
Id. at 299 (italics omitted).
88
Id. at 278, 281.
89
Louis D. Brandeis, Other People's Money and How the Bankers Use It 92 (1914). Thomas Jefferson similarly understood that watchful eyes could breed better action, writing, ‘Whenever you do a thing, act as if all the world were watching’. See Rikkilee Moser, Comment, As If All The World Were Watching: Why Today's Law Enforcement Needs To Be Wearing Body Cameras, 36 N. Ill. L. Rev. Online J. 1, 28 (2015) (quoting Thomas Jefferson, citation omitted).
90
Pub. L. No. 94–409, 90 Stat. 1241 (1976) (codified at 5 U.S.C. § 552b).
91
Pub. L. No. 111–148, § 6002, 124 Stat. 689 (2010) (codified at 42 U.S.C. § 1320a-7h).
92
See Melissa Bateson, Daniel Nettle & Gilbert Roberts, Cues of Being Watched Enhance Cooperation in a Real-World Setting, 2 Biology Letters 412, 412 (2006).
93
Id.
94
See Max Ernest-Jones, Daniel Nettle & Melissa Bateson, Effects of Eye Images on Everyday Cooperative Behavior: A Field Experiment, 32 Evolution & Hum. Biol. 172 (2011).
95
Ram, Fortuity, supra note 54, at 777.
96
Id. at 778, 782.
97
Id. at 782.
98
Id.
99
Id. at 783, 786.
100
See generally id. at 786, 787 (arguing that, although fortuitous and deliberate partial matching pose the same legal and ethical quandaries, many states may embrace the former while excluding the latter in order to avoid publicity and public process).
101
See Murphy, supra note 1, at 281.
102
Id.
103
Id.