AI and Human Rights: Criminal Justice System
Summary
Automated decision-making tools are used widely and opaquely in the U.S., both directly in the criminal justice system and in ways that feed the criminal justice cycle. The map above is a necessarily incomplete representation of these tools, largely because of a lack of transparency by design: affected people often cannot learn what tools their jurisdictions use because of trade secret carveouts in open government laws, as well as similar roadblocks in evidentiary and discovery rules. The explanations, documents, and chart below, showing which tools are used where, are drawn from the patchwork of jurisdictions that are occasionally forthright about the tools they use, from the outputs of open government requests, and from news reports that often expose a problematic use of one of these tools.
The tools
Predictive Policing Tools include "any policing strategy or tactic that develops and uses information and advanced analysis to inform forward-thinking crime prevention," according to the National Institute of Justice. Predictive policing comes in two main forms: location-based and person-based. Location-based predictive policing identifies places of repeated property crime and tries to predict where crime will occur next, while person-based predictive policing aims to pinpoint who might commit a crime by measuring the risk that a given individual will offend. Both are used in different jurisdictions, and both rely on past policing data as the main driver of their predictions, necessarily creating a self-fulfilling prophecy that concentrates arrest resources where police have already been looking. The Bureau of Justice Assistance has given, and continues to give, grants to police departments around the country to create and pilot these programs. However, two high-profile systems, in Chicago and Los Angeles, have been shut down due to limited effectiveness and significant demonstrated bias.
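To make the self-fulfilling-prophecy concern concrete, here is a minimal toy simulation (entirely hypothetical, not based on any vendor's actual system). Patrols are allocated each year to wherever past recorded arrests are highest, so the district that happens to start with extra patrols accumulates the most arrests even though every district has the same underlying crime rate.

```python
# Toy model of the predictive-policing feedback loop (hypothetical sketch).
import random

random.seed(0)

NUM_DISTRICTS = 5
TRUE_CRIME_RATE = 0.10            # identical underlying crime everywhere
recorded_arrests = [0] * NUM_DISTRICTS
patrols = [1] * NUM_DISTRICTS
patrols[0] = 3                    # district 0 happens to start with extra patrols

for year in range(10):
    for d in range(NUM_DISTRICTS):
        # Recorded arrests scale with patrol presence, not with any real
        # difference in crime: each patrol makes 100 stops per year.
        for _ in range(patrols[d] * 100):
            if random.random() < TRUE_CRIME_RATE:
                recorded_arrests[d] += 1
    # The "predictive" step: next year's patrols go where past arrests are highest.
    hottest = max(range(NUM_DISTRICTS), key=lambda d: recorded_arrests[d])
    patrols[hottest] += 1

print(recorded_arrests)  # district 0 dominates despite equal true crime rates
```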
Helpful terms to remember:
Automated decision-making tools are tools or systems that analyze data in order to aid in decision-making -- these can vary from simple algorithms to machine learning programs. Most of the systems discussed on this page are automated decision-making systems.
Algorithms are functions that analyze data using a defined set of instructions and yield an output based on the instructions.
Artificial Intelligence (AI) is a broad term used to describe computational systems used to automate or optimize decision-making processes based on a wide range of data inputs.
Machine learning AI is an increasingly common data analysis technique that uses an algorithm or system that can adapt over time based on its inputs and outputs (a brief sketch contrasting a hand-written algorithm with a learned model follows this list).
Surveillance tools encompass a large swath of technologies and functions that can be used to watch, track, and store information about a person. This ranges from Ring doorbells, whose servicer has direct partnerships with law enforcement, to facial recognition systems at the border and in U.S. cities. Learn more about this topic and EPIC's work here.
Criminalizing algorithms include algorithms used in housing, credit determinations, healthcare, hiring, schooling, and more. Many of these have been shown to make recommendations and decisions that negatively affect marginalized communities, encode systemic racism, and contribute to entry into the criminal justice system. All of the other tools discussed here are affected by the results and data points produced by these criminalizing algorithms.
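The distinction between a plain algorithm and a machine learning system can be made concrete in a few lines of code. The sketch below is purely illustrative and uses invented factors and data (it also assumes scikit-learn is installed): the first function applies fixed, hand-written instructions, while the second derives its decision rule from historical examples.

```python
# Hypothetical sketch contrasting a fixed-rule algorithm with machine learning.
# Requires scikit-learn; all factors, data, and labels below are invented.
from sklearn.linear_model import LogisticRegression

def fixed_rule_score(prior_arrests: int, age: int) -> int:
    """A plain algorithm: the scoring instructions are written by hand."""
    return (2 if prior_arrests > 3 else 0) + (1 if age < 25 else 0)

# A machine learning model: the decision rule (the weights) is instead fit
# to historical examples, so biased history produces a biased rule.
X = [[0, 40], [5, 22], [1, 35], [6, 19], [2, 50], [4, 23]]  # [prior_arrests, age]
y = [0, 1, 0, 1, 0, 1]                                       # 1 = was rearrested

model = LogisticRegression().fit(X, y)

print(fixed_rule_score(5, 22))           # output of the hand-written rule
print(model.predict_proba([[5, 22]]))    # probabilities from the learned rule
```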
Risk Assessment Tools are used in almost every state in the U.S. Many jurisdictions use them pre-trial, but they also appear at sentencing, in prison management, and in parole determinations. There are also risk assessment tools built for specific functions in the criminal justice system, such as domestic violence risk or juvenile justice risk, on the understanding that those contexts call for different factors than a general assessment of the risk of rearrest or re-offense.
Pretrial Risk Assessment tools are designed to predict future behavior by defendants and incarcerated persons and to quantify that risk. They use socioeconomic status, family background, neighborhood crime, employment status, and other factors to reach a supposed prediction of an individual's criminal risk, either on a scale from "low" to "high" or with specific percentages. Significant empirical research has shown disparate impacts of risk assessment tools on criminal justice outcomes based on the race, ethnicity, and age of the accused. The concerns with these tools do not stop there. The tools vary, but they use "actuarial assessments" to estimate (1) the likelihood that the defendant will re-offend before trial ("recidivism risk") and (2) the likelihood the defendant will fail to appear at trial ("FTA"). These often proprietary techniques are used to set bail, determine sentences, and even contribute to determinations about guilt or innocence. Yet the inner workings of these tools are largely hidden from public view. As a result, two people accused of the same crime may receive sharply different bail or sentencing outcomes based on inputs that are beyond their control, yet have no way of assessing or challenging the results.
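The sketch below illustrates, with entirely hypothetical factors, weights, and cutoffs, how a point-based "actuarial assessment" of this kind typically works; the factors and weights of real tools are proprietary and may differ substantially.

```python
# Hypothetical point-based "actuarial assessment" (illustrative only; the
# factors, weights, and cutoffs of real tools are typically proprietary).

def pretrial_risk_label(defendant: dict) -> str:
    points = 0
    points += 3 if defendant["prior_failures_to_appear"] > 1 else 0
    points += 2 if defendant["unemployed"] else 0             # socioeconomic proxy
    points += 2 if defendant["high_crime_neighborhood"] else 0
    points += 1 if defendant["age"] < 25 else 0
    # Hard cutoffs convert the point total into the label a judge sees.
    if points >= 5:
        return "high"
    if points >= 3:
        return "medium"
    return "low"

# Two people facing the same charge can land in different risk bands based
# entirely on circumstances outside their control:
print(pretrial_risk_label({"prior_failures_to_appear": 0, "unemployed": True,
                           "high_crime_neighborhood": True, "age": 22}))   # high
print(pretrial_risk_label({"prior_failures_to_appear": 0, "unemployed": False,
                           "high_crime_neighborhood": False, "age": 40}))  # low
```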
As criminal justice algorithms have come into greater use at the federal and state levels, they have also come under greater scrutiny. Many criminal justice experts have denounced "risk assessment" tools as opaque, unreliable, and unconstitutional.
A 2016 investigation by ProPublica tested the COMPAS system adopted by the state of Florida using the same benchmark as COMPAS: the likelihood of re-offending within two years. ProPublica found that the formula was particularly likely to flag black defendants as future criminals, labeling them as such at almost twice the rate of white defendants, while white defendants were labeled low risk more often than black defendants. The investigators also found that the scores were unreliable in forecasting violent crime: only 20 percent of the people predicted to commit violent crimes actually went on to do so. When considering a full range of crimes, including misdemeanors, the tool was somewhat more accurate but still far from reliable: 61 percent of those deemed likely to reoffend were arrested for a subsequent crime within two years. According to ProPublica, some miscalculations of risk stemmed from inaccurate inputs (for example, failing to include a prison record from another state), while others were attributed to the way factors are weighed (for example, someone who has molested a child may be categorized as low risk because he has a job, while someone convicted of public intoxication may be considered high risk because he is homeless).
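A short worked example helps show how such error-rate disparities are measured. The counts below are invented, but they are chosen to reproduce the pattern ProPublica described: both groups are equally "calibrated" (the same share of flagged people go on to reoffend), yet one group's false positive rate is roughly twice the other's.

```python
# Hypothetical error-rate arithmetic (invented counts, chosen to mirror the
# pattern ProPublica reported: equal calibration, unequal false positive rates).

def rates(tp, fp, tn, fn):
    fpr = fp / (fp + tn)   # flagged high risk but did NOT reoffend
    fnr = fn / (fn + tp)   # labeled low risk but DID reoffend
    ppv = tp / (tp + fp)   # of those flagged, the share who actually reoffended
    return round(fpr, 2), round(fnr, 2), round(ppv, 2)

# tp/fp/tn/fn = true positives, false positives, true negatives, false negatives
group_a = rates(tp=300, fp=200, tn=400, fn=100)
group_b = rates(tp=150, fp=100, tn=500, fn=250)

print("Group A (FPR, FNR, PPV):", group_a)  # (0.33, 0.25, 0.6)
print("Group B (FPR, FNR, PPV):", group_b)  # (0.17, 0.62, 0.6)
# Both groups are "calibrated" (PPV = 0.6), yet Group A's false positive rate
# is twice Group B's: the kind of disparity a single accuracy number hides.
```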
COMPAS is one of the most widely used algorithms in the country. Northpointe published a validation study of the system in 2009, but it did not include an assessment of predictive accuracy by ethnicity. It referenced a study that had evaluated COMPAS' accuracy by ethnicity, which reported weaker accuracy for African-American men, but claimed the small sample size rendered it unreliable. Northpointe has not shared how its calculations are made but has stated that the basis of its future crime formula includes factors such as education levels and whether a defendant has a job. Many jurisdictions have adopted COMPAS, and other "risk assessment" methods generally, without first testing their validity.
Defense advocates are calling for more transparent methods because they are unable to challenge the validity of the results at sentencing hearings. Professor Danielle Citron argues that because the public has no opportunity to identify problems with troubled systems, it cannot present those complaints to government officials. In turn, government actors lack the information they would need to correct flawed systems or change policy.
Over the last several years, prominent groups such as the Pretrial Justice Institute (PJI) strongly advocated for the introduction of these tools, and the Public Safety Assessment, among many other risk assessments, was adopted in nearly every state, up from only a handful at the beginning of the decade. However, in February 2020, PJI reversed this position, specifically stating that they "now see that pretrial risk assessment tools, designed to predict an individual's appearance in court without a new arrest, can no longer be a part of our solution for building equitable pretrial justice systems." One week later, the developers of the Public Safety Assessment, a widely used risk assessment created by the Laura and John Arnold Foundation, released a statement clarifying that "implementing an assessment cannot and will not result in the pretrial justice goals we seek to achieve."
Transparency is seldom required with pre-trial risk assessments. One of the primary criticisms of these risk assessment tools is that they are proprietary, developed by technology companies that refuse to disclose the inner workings of the "black box." Trade secret and other IP protection defenses have been raised against demands to disclose the underlying logic of these systems. In March 2019, Idaho became the first state to enact a law specifically promoting transparency, accountability, and explainability in pre-trial risk assessment tools, the algorithms that help inform sentencing and bail decisions for defendants. The law prevents a trade secrecy or IP defense, requires public availability of 'all documents, data, records, and information used by the builder to build or validate the pretrial risk assessment tool,' and empowers defendants to review all calculations and data that went into their risk score. For a deeper dive into pre-trial risk assessments, visit EPIC's report: "Liberty At Risk, Pre-trial Risk Assessment Tools in the U.S."
Risk Assessment Tools State-By-State
The following table is based on a survey of state practices performed by EPIC in September 2019 and updated in February 2020 with Mississippi FOI documents. The functions vary among pre-trial, sentencing, prison management, and parole. Most of these tools, even their existence, are largely opaque, and they change often.
* Bill enacted Mar. 2019: requires transparency, notification, and explainability.
**There is no official compendium of Risk Assessments used by states.
Abbreviations Key:
DV - Domestic Violence
COMPAS - Correctional Offender Management Profiling for Alternative Sanctions
PSA - Public Safety Assessment
PTRA - Pretrial Risk Assessment Instrument
CPAT - Colorado Pretrial Assessment Tool
PRRS - Pretrial Release Risk Scale
DELPAT - Delaware Pretrial Assessment Tool
ODARA - Ontario Domestic Assault Risk Assessment Tool
MNPAT - Minnesota Pretrial Assessment Tool
ORAS - Ohio Risk Assessment System
LS/CMI - Level of Service/Case Management Inventory
PRAISTX - Pretrial Risk Assessment Information System
VPRAI - Virginia Pretrial Risk Assessment Instrument
IRAS - Indiana Risk Assessment System
EPIC's Interest
EPIC has a strong interest in open government. Public disclosure of this information improves government oversight and accountability. It also helps ensure that the public is fully informed about the activities of government. EPIC routinely files lawsuits to force disclosure of agency records that impact critical privacy interests.
EPIC also has a strong interest in algorithmic transparency. Secrecy of the algorithms used to determine guilt or innocence undermines faith in the criminal justice system. In support of algorithmic transparency, EPIC submitted FOIA requests to six states to obtain the source code of "TrueAllele," a software product used in DNA forensic analysis. According to news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results.
The Universal Guidelines for Artificial Intelligence, grounded in a human rights framework, set forth twelve principles intended to guide the design, development, and deployment of AI and to inform frameworks for policy and legislation. Broadly, the guidelines address the rights and obligations of: 1) fairness, accountability, and transparency; 2) autonomy and human determination; 3) data accuracy and quality; 4) safety and security; and 5) minimization of scope. These principles can also guide the use of algorithms in the pre-trial risk context.
The very first principle, transparency, is seldom required with pre-trial risk assessments; the proprietary "black box" criticisms and Idaho's 2019 transparency law discussed above illustrate both the problem and one legislative response.
EPIC FOI Documents
EPIC obtained the following documents concerning criminal justice algorithms through state freedom of information requests.
- EPIC FOIA Request
- DC Pretrial Services Agency (PSA) Production Letter
- DC PSA Memo Explaining Changed Factors between 2015 and 2019
- Maxarth 2019 Validation Study
- Maxarth 2019 Validation Study: Predictive Bias Report
- Contract docs between PSA and Maxarth Part 1 of 2
- Contract docs between PSA and Maxarth Part 2 of 2
- E-Mails between Maxarth and PSA Part 1 of 8
- E-Mails between Maxarth and PSA Part 2 of 8
- E-Mails between Maxarth and PSA Part 3 of 8
- E-Mails between Maxarth and PSA Part 4 of 8
- E-Mails between Maxarth and PSA Part 5 of 8
- E-Mails between Maxarth and PSA Part 6 of 8
- E-Mails between Maxarth and PSA Part 7 of 8
- E-Mails between Maxarth and PSA Part 8 of 8
- 2013 PSA Risk Assessment Article
- EPIC Request
- Georgia Department of Community Supervision Unified (Parole and Probation) Risk Assessment Contract Information
- Georgia Department of Community Supervision Personalized Responses for Offender Adjustment and Community Transition (PROACT) Matrix
- Georgia Department of Community Supervision PROACT Matrix Policy
- EPIC FOIA Request
- Idaho Production Letter
- Idaho LSI-R Annotated Scoresheet
- Idaho LSI-R Training Manual
- Idaho LSI-R Scoring Guide
- Idaho LSI-R 2015 Validation
- Idaho LSI-R 2002 Validation
- Idaho Dept. of Corrections RFP for Motivational Interviewing Training
- Bid from Great Lakes Training
- Acceptance of Bid from Great Lakes Training
- Miscellaneous Idaho Contracting Documents
- Missouri Sentencing Advisory Commission (MOSAC) Risk Score: Validation Study (published in 2009 MOSAC Biennial Report)
- Missouri Board of Probation and Parole Risk Assessment (validation studies, policies and procedures)
- EPIC FOIA Request
- Mississippi Department of Corrections Intervention and Risk Assessment Training (2014)
- Mississippi Department of Corrections Risk Assessment Pilot Training - CRJ (2016)
- Mississippi Department of Corrections Risk and Needs Assessment Scoring Guide - CRJ (2016)
- Case Management Training - CRJ (2017)
- Mississippi Department of Corrections Risk Assessment Training - CRJ (2017)
- Mississippi Department of Corrections Risk and Needs Assessment Reliability Toolkit
- Mississippi Department of Corrections NeedsQ Script
- Mississippi Department of Corrections Sample Scoring Sheets
- Mississippi Department of Corrections New Offender Assessment Policy
- EPIC FOIA Request
- Nebraska FOIA Response
- Nebraska Risk and Needs Assessment Administrative Regulation
- Nebraska Inmate Classification Administrative Regulation
- Nebraska Substance Use Disorder Treatment Policy
- E-Mails between Nebraska Corrections Inspector General and STRONG-R Developer
- E-Mails between Nebraska Corrections Inspector General and Executive Officer of the Department of Correctional Services
- Nebraska Department of Correctional Services Request for Proposal ("RFP") Regarding Risk and Needs Assessments
- 1st Addendum to RFP
- 2nd Addendum to RFP
- Responsive Bid from Vant4ge d/b/a Allvest d/b/a Assessments.com
- Vant4ge Contract (2015-2018)
- Vant4ge Contract Renewal (2019-2022)
- Multi Health Systems Bid to RFP
- Comparative Analysis of West Virginia LS/CMI and U.S. Risk Scores by the Office of Research and Strategic Planning
- Northpointe Validation Studies (1/3)
- Northpointe Validation Studies (2/3)
- Northpointe Validation Studies (3/3)
- EPIC FOIA Request
- New Hampshire FOIA Response
- New Hampshire Department of Corrections Presentence Investigation (PSI) Report and Policies
- New Hampshire DOC PSI Policies
- Contract between New Hampshire DOC and University of Cincinnati
- University of Cincinnati Insurance Information for NH DOC Contract
- Vermont Attorney General Pretrial Manual
- Vermont Attorney General ORAS-PAT Guide
- Vermont Attorney General Pretrial Recommendation Sheet
- FY 2010 Northpointe Contract
- FY 2010 Northpointe Contract Addendum
- FY 2013 Northpointe Contract Addendum
- FY 2014 Northpointe Contract
- FY 2014 Northpointe Services Agreement
- FY 2015 Northpointe Contract
- FY 2016 Northpointe Contract
- 2007 COMPAS Validation Study: First Annual Report
- 2009 Evaluating the Predictive Validity of the COMPAS Risk and Needs Assessment System
- 2010 COMPAS Scales and Risk Models Validity and Reliability
- 2012 New York State COMPAS-Probation Risk and Need Assessment Study
- 2013 Predictive Validity of the COMPAS Reentry Risk Scales
- 2013 Summary Statistics Reentry Sample and COMPAS Norm Groups
- 2014 Automated COMPAS PSI Training Resource
- 2014 COMPAS Core Norms for Adult Institutions
- 2014 COMPAS Core Norms for Community Corrections
- 2014 COMPAS Reentry Norms for Women and Men
- 2015 COMPAS Practitioner's Guide
- 2015 Department of Corrections Memo re: COMPAS Use with PSI
- 2016 COMPAS Norming
- COMPAS Fact Sheet
- Comparison of CORE Norm Groups
- COMPAS Decile Cut Points Norming
- 2016 Department of Corrections Emails re: ProPublica Info Request
- 2016 Department of Corrections Emails re: NY Times Interview Request
- Evaluating the COMPAS Risk/Needs Assessment Tool
- Additional Resources on the Application of Risk/Needs Assessment at Sentencing
Resources
Legislation and Regulations
- Idaho Law (provides for transparency for risk assessment tools) (2019)
- AI Task Forces/Commissions:
- New York City (2017)
- New York State (2019)
- Alabama (2018)
- Vermont (2018)
- Massachusetts (Introduced in 2019) - EPIC Oct. 3, 2019 Testimony
- Sentencing Reform and Corrections Act of 2015 (a landmark sentencing reform bill which would have mandated the use of such assessments in federal prisons)
- Model Penal Code: Sentencing § 6B.09 (recommending the implementation of recidivism-based actuarial instruments in sentencing guidelines)
Government Studies
- Nathan James, Risk and Needs Assessment in the Criminal Justice System, Congressional Research Service (Oct. 15, 2015)
Notable Cases
- EPIC v. DOJ (Suit for records of Criminal Justice Algorithms by the Federal Government)
- EPIC v. CBP (Suit for documents about secret analytical tools used to assign risk assessments to travelers)
- EPIC v. DHS (Suit for records of DHS program that predicts crime risk based on “physiological and behavioral signals”)
- United States v. Booker, 125 S. Ct. 738 (2005)
- Mistretta v. United States, 109 S. Ct. 647 (1989)
- State v. Loomis, No. 16-6387 (U.S.) (Wisconsin case in which the defendant petitioned the U.S. Supreme Court for certiorari; the petition was denied in June 2017)
- Defendant's Brief (Dec. 4, 2015)
- State's Brief (Jan. 19, 2016)
- Defendant's Reply Brief (Feb. 4, 2016)
- Memorandum Opinion (July 13, 2016)
- Petition for Writ of Certiorari (Oct. 5, 2016)
- Brief of Respondent Wisconsin (Jan. 27, 2017)
- Iowa v. Guise, 921 N.W.2d 235 (2016).
- Doe v. Sex Offender Registry Board, 466 Mass. 594, 999 N.E.2d 478 (2013) (holding that the Sex Offender Registry Board arbitrarily ignored scientific evidence that female offenders generally pose a much lower risk of re-offense; SORB was empowered to consider any useful information, including scientific evidence introduced by the offender, in arriving at a classification decision, and authoritative evidence was introduced suggesting that established "risk assessment" guidelines, developed from studies of male offenders, could not accurately predict the recidivism risk of a female offender, and that such risk could not be evaluated without examining the effect of gender)
- Malenchik v. State, No. 79A02-0902-CR-133 (Ind. Ct. App. June 5, 2009) (holding that it was not improper for the trial court to take into consideration a defendant’s LSI-R score at sentencing)
- In re CDK, 64 S.W.3d 679 (Tex. App. 2002) (holding that admitting an assessment report on a father's sexual deviancy as expert witness testimony was an abuse of discretion because the plaintiff did not show how the formulas were derived or whether they had ever been subjected to analysis or testing)
Academic Articles
- Ben Green, The False Promise of Risk Assessments: Epistemic Reform and the Limits of Fairness, Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT*) (2020)
- Cynthia Rudin and Joanna Radin, Why are we using Black Box Models in AI When We Don't Need To? A Lesson from An Explainable AI Competition, Harvard Data Science Review (2020)
- Ari Ezra Waldman, Power, Process, and Automated Decision-Making, 88 Fordham Law Review (2019)
- Elizabeth Joh, Automated Seizures: Police Stops of Self-Driving Cars, N.Y.U. L. Rev. Online (2019).
- Elizabeth Joh, Artificial Intelligence and Policing: Hints in the Carpenter Decision, Ohio St. J. Crim. L. _ (2019)
- Sarah L. Desmarais & Evan M. Lowder, Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys, Safety and Justice Challenge (2019)
- Danielle Kehl, Priscilla Guo & Samuel Kessler, Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing, Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard Law School (2017)
- Elizabeth Joh, Policing and Artificial Intelligence: First questions, 41 Seattle U. L. Rev. 1139 (2018).
- Elizabeth Joh, Feeding the Machine: Policing, Crime Data, & Algorithms, 26 William & Mary Bill of Rights J. 287 (2017).
- Elizabeth Joh, The Undue Influence of Surveillance Technology Vendors on Policing, 92 N.Y.U. L. REV. ONLINE 101 (2017).
- Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. Law Review 671 (2016)
- Nicholas Scurich & John Monahan, Evidence-Based Sentencing: Public Openness and Opposition to Using Gender, Age, and Race as Risk Factors for Recidivism, 40 Law & Human Behavior 36 (2016)
- Jennifer Skeem & Christopher Lowenkamp, Risk, Race, and Recidivism: Predictive Bias and Disparate Impact (March 7, 2016)
- Gregory Cui, Evidence-Based Sentencing and the Taint of Dangerousness, 125 Yale Law Journal Forum 315 (2016)
- John Monahan and Jennifer L. Skeem, Risk Assessment in Criminal Sentencing, Annual Rev. of Clinical Psychology (Sept. 17, 2015).
- Claire Botnick, Evidence-Based Practice and Sentencing in State Courts: A Critique of the Missouri System, 49 Washington University Journal of Law & Policy 159, 160 (2015)
- Dawinder S. Sidhu, Moneyball Sentencing, 56 Boston College Law Review 671 (2015)
- Jennifer E. Laurin, Gideon by the Numbers: The Emergence of Evidence-Based Practice in Indigent Defense, 12 Ohio State Journal of Criminal Law 325 (2015)
- Melissa Hamilton, Adventures in Risk: Predicting Violent and Sexual Recidivism in Sentencing Law, 47 Arizona State Law Journal 1 (2015)
- Melissa Hamilton, Back to the Future: The Influence of Criminal History on Risk Assessments, 20 Berkeley Journal of Criminal Law 75, 76 (2015)
- Melissa Hamilton, Risk-Needs Assessment: Constitutional and Ethical Challenges, 52 American Criminal Law Review 231 (2015)
- Shaina D. Massie, Orange Is the New Equal Protection Violation: How Evidence-Based Sentencing Harms Male Offenders, 24 William & Mary Bill of Rights Journal 521 (2015)
- Bernard Harcourt, Risk as a Proxy for Race: The Dangers of Risk Assessment, 27 Federal Sentencing Reporter 237 (2015)
- Sonja Starr, The New Profiling: Why Punishing Based on Poverty and Identity is Unconstitutional and Wrong, 27 Federal Sentencing Reporter 229 (2015)
- Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).
- John Monahan & Jennifer Skeem, Risk Redux: The Resurgence of Risk Assessment in Criminal Sanctioning, 26 Federal Sentencing Reporter 158 (2014)
- Sonja Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stanford Law Review 803 (2014)
- Mark Olver, Keira Stockdale & J.S. Wormith, Thirty Years of Research on the Level of Service Scales: A Meta-Analytic Examination of Predictive Accuracy and Sources of Variability, Psychological Assessment (2013)
- J.C. Oleson, Risk in Sentencing: Constitutionally Suspect Variables and Evidence-Based Sentencing, 64 SMU Law Review 1329 (2011)
- Frank Pasquale, Restoring Transparency to Automated Authority, 9 Journal on Telecommunications & High Technology Law 235 (2011).
- A. Michael Froomkin, Government Data Breaches, University of Miami Legal Studies Research Paper No. 2009-20
- Bernard E. Harcourt, Risk as a Proxy for Race, University of Chicago Public Law & Legal Theory Working Paper No. 323, 2010.
- Thomas H. Cohen, Christopher T. Lowenkamp, William E. Hicks, Revalidating the Federal Pretrial Risk Assessment Instrument (PTRA): A Research Summary, Probation and Pretrial Services Office of the Administrative Office of the U.S. Courts.
- Danielle Citron, Technological Due Process, 85 Washington University Law Review 125 (2008)
Other resources
- Mapping Pretrial Injustice, Media Mobilizing Project and MediaJustice (Feb. 2020)
- Stanford Pretrial Risk Assessment Tools Factsheet Project
- EPIC AI Policy Sourcebook 2019 - the first reference book on AI Policy
- Danielle Citron, (Un)Fairness of Risk Scores in Criminal Sentencing, Forbes (July 13, 2016)
- Luis Daniel, Guest post: The dangers of evidence-based sentencing, MathBabe (Oct. 21, 2014)
Books
- Weapons of Math Destruction by Cathy O’Neil
- Black Box Society by Frank Pasquale
- Automating Inequality by Virginia Eubanks
- Algorithms of Oppression by Safiya Noble
- Artificial Unintelligence by Meredith Broussard
- Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher
Documents and Reports
- Pretrial Justice Institute (PJI) statement no longer recommending risk assessment tools, February 7, 2020
- Sample COMPAS risk assessment questionnaire - Wisconsin's 137 question risk assessment
- Sample sentencing reports judges receive that includes risk assessment results
- Jennifer Elek, Roger Warren & Pamela Casey, Using Risk and Needs Assessment Information at Sentencing: Observations from Ten Jurisdictions, National Center for State Courts’ Center for Sentencing Initiatives
- Tara Agense & Shelley Curran, The California Risk Assessment Pilot Project: The Use of Risk and Needs Assessment Information in Adult Felony Probation Sentencing and Violation Proceedings, Judicial Council of California Operations and Programs Division Criminal Justice Services (December 2015)
News
- Rachel Metz and Scottie Andrew, CNN, In California, voters must choose between cash bail and algorithms, Oct. 31, 2020
- Tom Simonite, WIRED, Algorithms Were Supposed to Fix the Bail System. They Haven't, Feb. 19, 2020
- Cade Metz and Adam Satariano, New York Times, How the Algorithms Running Your Life Are Biased, Feb. 06, 2020
- Andrew Van Dam, Washington Post, Algorithms were supposed to make Virginia judges fairer. What happened was far more complicated., Nov. 19, 2019
- Ali Ingersoll, Washington Post, How the Algorithms Running Your Life Are Biased, Sep. 9, 2019
- Karen Hao, MIT Technology Review, AI is sending people to jail - and getting it wrong, Jan. 21, 2019
- Derek Thompson, The Atlantic, Should We Be Afraid of AI in the Criminal Justice System?, June 20, 2019
- Marc Rotenberg, New York Times, Bias By Computer, Aug. 10, 2016.
- Megan Garber, When Algorithms Take the Stand, The Atlantic (June 30, 2016)
- John Naughton, Opinion, Even Algorithms Are Biased Against Black Men, Guardian (June 26, 2016)
- Mitch Smith, In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures, NY Times (June 22, 2016).
- Joe Palazzolo, Wisconsin Supreme Court to Rule on Predictive Algorithms Used in Sentencing, Wall St. J. (June 5, 2016)
- Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, What Algorithmic Injustice Looks Like in Real Life, ProPublica, May 25, 2016
- Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias, ProPublica, May 23, 2016
- Nicholas Diakopoulos, We Need to Know the Algorithms the Government Uses to Make Important Decisions About Us, The Conversation, May 23, 2016 (also discusses state responses to FOIA requests; the author sent requests to every state)
- Nicholas Diakopoulos, How to Hold Governments Accountable for the Algorithms They Use, Slate, Feb. 11, 2016
- Jennifer Golbeck, How to Teach Yourself About Algorithms, Slate, Feb. 9, 2016
- Logan Koepke, Pennsylvania Will Vary Jail Terms for the Same Crime Based on Where You Live, EqualFuture, September 16, 2015
- Anna M. Barry-Jester, Ben Casselman, Dana Goldstein, The New Science of Sentencing, The Marshall Project, Aug. 4, 2015
- Anna M. Barry-Jester, Ben Casselman, Dana Goldstein, Should Prison Sentences Be Based on Crimes That Haven’t Been Committed Yet?, The Marshall Project, Aug. 4, 2015
- Eileen Sullivan & Ronnie Green, States Predict Inmates’ Future Crimes With Secretive Studies, Associated Press, Feb. 24, 2015
- Sonja Starr, Sentencing, by the Numbers, New York Times, Aug. 10, 2014