One, none, one hundred thousand AIs

Book Review of Machines We Trust: Perspectives on Dependable AI

Authors

DOI:

https://doi.org/10.55613/jeet.v32i2.120

Keywords:

Artificial Intelligence, Trust, Social responsibility, Unintended consequences, Experiments

Abstract

Like many innovative technologies, AI possesses a transformational power: its implementation in society is not a neutral additive process, but may significantly alter various social and cultural dynamics. Socio-ethical concerns have led to the demand for AI devices that can be ‘trusted’. The recent publication of Machines We Trust provides novel opportunities to discuss some socio-ethical issues arising from human-AI interactions. After defining the concepts of trust, trustworthiness, and reliability, and explaining in which sense it is possible to talk about ‘trustworthy AI’, I focus on two chapters of the volume that consider concrete applications of AI. I conclude by suggesting that, instead of considering the different contributions to the volume in isolation from one another, it may be illuminating to compare and contrast them. Such a reading of the book leads us to question whether it is still possible to talk about trustworthy AI ‘in general’ or whether the discussion of the socio-ethical issues posed by AI should proceed in a piecemeal, case-by-case fashion.

Published

2022-12-13

How to Cite

One, none, one hundred thousand AIs: Book Review of Machines We Trust: Perspectives on Dependable AI. (2022). Journal of Ethics and Emerging Technologies, 32(2), 1-8. https://doi.org/10.55613/jeet.v32i2.120
