Beyond State v Loomis: artificial intelligence, government algorithmization and accountability

Han-Wei Liu, Ching-Fu Lin, Yu-Jie Chen

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Developments in data analytics, computational power and machine learning techniques have driven all branches of the government to outsource authority to machines in performing public functions—social welfare, law enforcement and, most importantly, courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, well demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. With a close examination of the case, this article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks posed by rampant ‘algorithmization’ of government functions to due process, equal protection and transparency. We further assess some important governance proposals and suggest ways for improving the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.
Original language: English
Number of pages: 20
Journal: International Journal of Law and Information Technology
DOI: 10.1093/ijlit/eaz001
Publication status: Accepted/In press - 12 Feb 2019

Cite this

@article{d6e58de016a94456930ccedc618cf054,
title = "Beyond State v Loomis: artificial intelligence, government algorithmization and accountability",
abstract = "Developments in data analytics, computational power and machine learning techniques have driven all branches of the government to outsource authority to machines in performing public functions—social welfare, law enforcement and, most importantly, courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, well demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. With a close examination of the case, this article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks posed by rampant ‘algorithmization’ of government functions to due process, equal protection and transparency. We further assess some important governance proposals and suggest ways for improving the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.",
author = "Han-Wei Liu and Ching-Fu Lin and Yu-Jie Chen",
year = "2019",
month = "2",
day = "12",
doi = "10.1093/ijlit/eaz001",
language = "English",
journal = "International Journal of Law and Information Technology",
issn = "0967-0769",
publisher = "Oxford University Press",
}

Beyond State v Loomis: artificial intelligence, government algorithmization and accountability. / Liu, Han-Wei; Lin, Ching-Fu; Chen, Yu-Jie.

In: International Journal of Law and Information Technology, 12.02.2019.


TY - JOUR

T1 - Beyond State v Loomis

T2 - artificial intelligence, government algorithmization and accountability

AU - Liu, Han-Wei

AU - Lin, Ching-Fu

AU - Chen, Yu-Jie

PY - 2019/2/12

Y1 - 2019/2/12

N2 - Developments in data analytics, computational power and machine learning techniques have driven all branches of the government to outsource authority to machines in performing public functions—social welfare, law enforcement and, most importantly, courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, well demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. With a close examination of the case, this article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks posed by rampant ‘algorithmization’ of government functions to due process, equal protection and transparency. We further assess some important governance proposals and suggest ways for improving the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.

AB - Developments in data analytics, computational power and machine learning techniques have driven all branches of the government to outsource authority to machines in performing public functions—social welfare, law enforcement and, most importantly, courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, well demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. With a close examination of the case, this article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks posed by rampant ‘algorithmization’ of government functions to due process, equal protection and transparency. We further assess some important governance proposals and suggest ways for improving the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.

U2 - 10.1093/ijlit/eaz001

DO - 10.1093/ijlit/eaz001

M3 - Article

JO - International Journal of Law and Information Technology

JF - International Journal of Law and Information Technology

SN - 0967-0769

ER -