AI and Digital Discrimination

Press/Media: Article/Feature

Description

While automated decision-making, with its algorithms often described as AI, serves as a catalyst for efficiency, it can also have harmful consequences. Though typically viewed as innocuous and fair, these decision-making algorithms can introduce and even amplify biases, creating or perpetuating structural inequalities in society.

Digital discrimination stemming from AI bias is now well documented: a study published by the U.S. Department of Commerce found that people of color are more likely than whites to be misidentified by AI-based facial recognition technology. The list of cases involving AI bias has continued to grow over the last several years, extending to increasingly complex processes with grave consequences.

For example, in police work there have been numerous wrongful arrests resulting from false hits by facial recognition software. In healthcare, an AI algorithm that was supposed to identify high-risk, chronically ill patients and assign them additional primary care visits wound up favoring white patients over Black patients. The algorithm used past healthcare costs as a proxy for healthcare needs, and this choice introduced a racial bias: because Black patients tend to incur lower costs than white patients with the same level of illness, comparable spending masked greater underlying need, so equally sick Black patients were less likely to be flagged for extra care.
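
To make that mechanism concrete, the short sketch below uses synthetic data to show how a screening rule trained on cost rather than on need can disadvantage a group that spends less at the same level of illness. The group labels, numbers, and threshold are purely illustrative assumptions and are not drawn from the actual healthcare algorithm described above.

import numpy as np

# Hypothetical illustration with synthetic data: a screening rule flags
# "high-risk" patients based on past healthcare cost, used as a proxy for
# underlying need. If one group incurs lower costs at the same level of
# need (for example, because of unequal access to care), its equally sick
# members are flagged less often.
rng = np.random.default_rng(0)
n = 10_000

# True underlying need is identical across the two (made-up) groups.
need_a = rng.normal(loc=5.0, scale=1.0, size=n)
need_b = rng.normal(loc=5.0, scale=1.0, size=n)

# Observed cost tracks need, but group B spends about 30% less at equal need.
cost_a = need_a + rng.normal(0.0, 0.5, size=n)
cost_b = 0.7 * need_b + rng.normal(0.0, 0.5, size=n)

# The rule flags the costliest 10% of all patients for extra primary care.
threshold = np.percentile(np.concatenate([cost_a, cost_b]), 90)

# Despite identical need, group A is flagged far more often than group B.
print(f"Group A flagged: {(cost_a > threshold).mean():.1%}")
print(f"Group B flagged: {(cost_b > threshold).mean():.1%}")

In this toy setting the gap disappears if the rule is applied to need itself rather than to cost, which is why the choice of proxy variable matters so much.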

These anecdotes illustrate the unintended biases that could result from automated decision-making systems commonly used in such varied fields as marketing, insurance, healthcare, law, and finance. With a multidisciplinary approach and a coordinated effort, however, there are solutions to mitigate such biases—an effort to which management accountants and other financial professionals can contribute.
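
As one small illustration of the kind of check finance and audit professionals could help run, the sketch below compares the rate at which a hypothetical automated system selects members of two groups and reports the ratio of the lower rate to the higher one, a simple disparate-impact style screen. The group names, decisions, and the 0.8 reference point are illustrative assumptions, not figures from the article.

from collections import defaultdict

def selection_rates(records):
    # records: iterable of (group, decision) pairs; decision is True/False.
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        selected[group] += int(decision)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    # Lowest selection rate divided by the highest; values well below 1.0
    # (for example, under the commonly cited 0.8 mark) warrant a closer look.
    return min(rates.values()) / max(rates.values())

# Made-up decisions from a hypothetical automated screening tool.
decisions = ([("group_a", True)] * 45 + [("group_a", False)] * 55
             + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.45, 'group_b': 0.2}
print(disparate_impact_ratio(rates))  # about 0.44, well below 0.8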

Period: 1 Dec 2022

Media contributions

  • Title: AI and Digital Discrimination
    Degree of recognition: International
    Media name/outlet: Strategic Finance
    Media type: Web
    Country/Territory: United States of America
    Date: 1 Dec 2022
    URL: https://sfmagazine.com/post-entry/december-2022-ai-and-digital-discrimination/
    Persons: Arif Perdana, Eric W. Lee