Trust in Computational Intelligence systems: A case study in public perceptions

Document Type

Conference Proceeding

Publication Title

2019 IEEE Congress on Evolutionary Computation (CEC)

Publisher

IEEE

School

School of Business and Law

RAS ID

28929

Comments

Crockett, K., Goltz, S., Garratt, M., & Latham, A. (2019). Trust in Computational Intelligence Systems: A Case Study in Public Perceptions. In 2019 IEEE Congress on Evolutionary Computation (CEC) (pp. 3227-3234).

Abstract

The public debate and discussion about trust in Computational Intelligence (CI) systems is not new, but it is a topic that has seen a recent resurgence. This is mainly due to the explosion of technological innovations brought to the public's attention, moving from lab to reality, usually through media reporting. This growth in public attention was further compounded by the 2018 GDPR legislation and new laws regarding the right to explainable systems, such as the use of "accurate data", "clear logic" and the "use of appropriate mathematical and statistical procedures for profiling". Therefore, trust is not just a topic for debate: it must be addressed from the outset, from the selection of the fundamental machine learning processes used to create models embedded within autonomous decision-making systems, to the selection of training, validation and testing data. This paper presents current work on trust in the field of Computational Intelligence systems and discusses the legal framework we should ascribe to trust in CI systems. A case study examining current public perceptions of recent CI-inspired technologies, conducted at a national science festival, is presented, with some surprising results. Finally, we look at current research underway that aims to increase trust in Computational Intelligence systems, and we identify a clear educational gap.

DOI

10.1109/CEC.2019.8790147

Access Rights

Subscription content
