Beyond State v Loomis: artificial intelligence, government algorithmization and accountability

Han-Wei Liu, Ching-Fu Lin, Yu-Jie Chen

Research output: Contribution to journal › Article › Research › peer-review

62 Citations (Scopus)


Developments in data analytics, computational power and machine learning techniques have driven all branches of government to outsource authority to machines in performing public functions: social welfare, law enforcement and, most importantly, the courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals' rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintentional bias against and harm to under-represented populations, and the broader legal, social and ethical ramifications. State v Loomis, a recent case in the USA, well demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. With a close examination of the case, this article unpacks the issues of the 'legal black box' and the 'technical black box' to identify the risks posed by rampant 'algorithmization' of government functions to due process, equal protection and transparency. We further assess some important governance proposals and suggest ways to improve the accountability of AI-facilitated decisions. As AI systems are commonly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.

Original language: English
Article number: eaz001
Pages (from-to): 122-141
Number of pages: 20
Journal: International Journal of Law and Information Technology
Issue number: 2
Publication status: Published - Jun 2019


  • accountability
  • algorithms
  • artificial intelligence
  • black box
  • human rights
  • rule of law
  • State v Loomis
