Prof. van Wynsberghe speaks about the risks of AI for persons with disabilities at the world's first “G7 Global Inclusion Summit”
In early September 2022, the German Federal Government Commissioner for Matters relating to Persons with Disabilities hosted the first G7 “Global Inclusion Summit” in Berlin. The aim of the summit was to work together to enable greater participation for persons with disabilities in the G7 states and to address global challenges from the perspective of inclusion as well.
Prof. van Wynsberghe was invited to give a keynote address on the impact of artificial intelligence on persons with disabilities. She drew attention to the fact that disability studies and disability scholars are largely excluded from the debate on AI bias, even though the use of AI frequently exacerbates the marginalization of persons with disabilities.
The full text of her talk is reproduced below in the original English:
Disrupting the Injustice of AI
Prof. Dr. Aimee van Wynsberghe
Patterns of cultural stereotypes, discrimination, and marginalization of individuals and groups are found in the historical data used to train Artificial Intelligence (AI). Consequently, when AI is used in society it reinforces, and in some cases exacerbates, certain forms of discrimination. Although the field of AI ethics treats the discussion of bias, stereotyping, and discrimination in AI as paramount, disability studies and disability scholars are largely excluded from the AI bias debate. The result is a systemic continuation of the historical marginalization of persons with disabilities through the development and use of AI.
There are many concerns to be addressed in the context of ‘AI and Disability’. The first is the design of AI and its representation of persons with a disability as ‘other than normal’, as an ‘outlier’. In this sense, AI works to reproduce and enforce certain norms in society that marginalize persons with disabilities. Studies to date show that individuals who don’t ‘fit the norm’ are less likely to be perceived by AI systems; autonomous vehicles unable to recognize persons in wheelchairs, for example, may fail to avoid running such a person over. AI is also used in the recruitment processes of large companies to determine which candidates would make desirable employees. To do this, the AI model is trained to examine speech patterns, tone of voice, and facial movements, among other indicators. For individuals who fall outside the majority, the so-called ‘outliers’, such tools discriminate against those with disabilities that affect facial expression, tone of voice, and so on (e.g., deafness, blindness, speech disorders, or the aftereffects of a stroke).
One of the main issues to be addressed is the difficulty of defining disability; disability is more often about how society responds to impairments than about being physically or mentally impaired. Those who are deaf, for example, often do not consider themselves disabled; rather, they consider themselves part of a community of persons with differing linguistic abilities. Equally important, the label of disability is dynamic, constantly shifting. Consider that at one point in time homosexuality was considered a disability and was even listed as such in the DSM. One must also question how the concept of disability is currently changing in response to the COVID pandemic: consider those physically suffering from long COVID, or those whose psychological distress and fear of being out in public lead to anxiety disorders and agoraphobia. How will AI models adapt to such indicators over time?
The above concerns relate to how AI is made and used with assumptions about disability at play; there are, however, additional concerns about the creation of disability as a consequence of using AI. Consider the AI used to track and monitor worker performance in Amazon warehouses: in the drive to optimize efficiency and financial gain, workers are pushed to extremes, leading to chronic injuries and psychological stress.
In short, there are myriad concerns related to the design, development, and use of AI and its relation to the complex discussion and concept of disability. It is not enough to add more data to training sets; rather, AI used in society must be understood as a social experiment for which metrics must be measured and ethical guardrails put in place. Any AI used in practice ought to be implemented with processes to protect members of historically marginalized groups. It is time now to actively disrupt the AI space and the consequent treatment of persons with disabilities, who have long been denied access to power and opportunity.