“Don’t let me be misunderstood”

Critical AI literacy for the constructive use of AI technology

Authors

Stefan Strauß

DOI:

https://doi.org/10.14512/tatup.30.3.44

Keywords:

deep automation bias, AI assessment, machine learning, uncertainty, awareness

Abstract

Research and development as well as societal debates on the risks of artificial intelligence (AI) often focus on crucial but impractical ethical issues or on technocratic approaches to managing societal and ethical risks with technology. To overcome this, more practical, problem-oriented analytical perspectives on the risks of AI are needed. This article proposes an approach that focuses on a meta-risk inherent in AI systems: deep automation bias. It is assumed that the mismatch between system behavior and user practice in specific application contexts, resulting from AI-based automation, is a key trigger for bias and other societal risks. The article presents the main factors of (deep) automation bias and outlines a framework providing indicators for detecting deep automation bias ultimately triggered by such a mismatch. This approach is intended to strengthen problem awareness and critical AI literacy and thereby to be of practical use.

Published

20.12.2021

How to Cite

Strauß S. “Don’t let me be misunderstood”: Critical AI literacy for the constructive use of AI technology. TATuP [Internet]. 2021 Dec. 20 [cited 2024 Mar. 28];30(3):44-9. Available from: https://www.tatup.de/index.php/tatup/article/view/6930