Is Artificial Intelligence really a threat?

The purported threat of artificial intelligence has recently surfaced in courtroom proceedings. A story covered by DailyMail.com on September 22, 2017, touts “a machine-learning algorithm that can accurately predict over 70 percent of Supreme Court decisions.”

So how much of this technology is truly artificial intelligence, and is it a threat to society? We can assess this against previous courtroom decisions. In 1977, the Supreme Court of the United States vacated and remanded the decision in Gardner v. Florida. “The Petitioner was denied due process of law when the death sentence was imposed by the Florida Supreme Court, at least in part, on the basis of information that he had no opportunity to deny or explain,” stated Justice Stevens. The information at issue came from a report containing details of which Gardner was unaware.

On July 13, 2016, a similar report, used in the sentencing of Eric M. Loomis, was evaluated on appeal by the Wisconsin Supreme Court. The report was generated by the proprietary algorithm behind the COMPAS system, which scores risk of violence and recidivism as part of case triage. In its opinion, the Wisconsin Supreme Court stated, “The court of appeals certified the specific question of whether the use of a COMPAS risk assessment at sentencing violates a defendant’s right to due process, either because the proprietary nature of COMPAS prevents defendants from challenging the COMPAS assessment’s scientific validity, or because COMPAS assessments take gender into account.” The court determined that, “if used properly, observing the limitations and cautions set forth herein, a circuit court’s consideration of a COMPAS risk assessment at sentencing does not violate a defendant’s right to due process,” explaining that its decision was supported by other independent factors and that the risk assessment was therefore not determinative.

The New York Times further highlighted the use of artificial intelligence on May 1, 2017: in a conversation with Chief Justice John G. Roberts Jr., a reporter asked, “Can you foresee a day when smart machines, driven with artificial intelligence, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?” The article discusses, in minor detail, the outcome of State of Wisconsin v. Eric M. Loomis (2016), closing with concerns about secrecy and technology’s impact on the judicial system.

Based on the use shown in State of Wisconsin v. Eric M. Loomis, the technology in question is not artificial intelligence in composition. The COMPAS report requires human interaction with the offender and runs on a set of pre-determined risk factors, which may or may not fit the offender taking the assessment. Today’s concept of artificial intelligence implies “a machine that thinks for itself”; that is not the same as machine learning, which by scientific standards belongs with assessment tools rather than autonomous robotics. Data is gathered through human-driven processes and stored in the machine’s memory; it is then processed, again through human interaction, to produce comparative results. Analysis tools like COMPAS have long been in circulation, prominently discussed in connection with defense attorney Susan Flanders in Harrington v. State, No. 05-1351 (2007); State v. Harrington, 03-0824 (2004), a case that shaped criminal litigation’s use of trial and mental-competency assessments in subsequent judicial decisions.
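The distinction drawn above can be made concrete with a small sketch. The actual COMPAS algorithm is proprietary and unpublished, so every question, weight, and threshold below is invented for illustration; the point is only the shape of such a tool: human-supplied questionnaire answers combined with fixed, pre-determined weights, with no learning or independent reasoning at scoring time.

```python
# Hypothetical sketch of a COMPAS-style risk assessment.
# All factor names, weights, and band thresholds are invented for
# illustration; the real COMPAS scoring model is proprietary.

def risk_score(answers, weights):
    """Combine questionnaire answers with fixed, pre-determined weights.

    Nothing here "thinks for itself": the tool applies weights chosen
    in advance to inputs gathered through a human-administered interview.
    """
    return sum(weights[factor] * answers[factor] for factor in weights)

def risk_band(score, low=2.0, high=5.0):
    """Map a raw score onto coarse bands, as risk tools report deciles or bands."""
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

# Example inputs gathered by interview/questionnaire (human interaction).
answers = {"prior_arrests": 3, "age_at_first_offense": 1, "employed": 0}
weights = {"prior_arrests": 1.0, "age_at_first_offense": 0.5, "employed": -0.5}

score = risk_score(answers, weights)   # 3*1.0 + 1*0.5 + 0*(-0.5) = 3.5
print(risk_band(score))                # prints "medium"
```

Because the weights are fixed before any offender is assessed, the tool can only compare a person against pre-determined categories, which is precisely why a category may or may not fit the individual taking the assessment.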
