Reliability Displays for Trust Calibration in AI-based Systems

Trust in an automated system is the expectation that it will support a person in situations of uncertainty and vulnerability. It is therefore important to know in which situations one should rely on an intelligent function and when not to do so. If the reliability of the intelligent function is underestimated or overestimated, i.e. if it is not “calibrated” well enough, this leads to distrust or overtrust and, as a result, to incorrect interaction with the system.

Reliability displays are intended to provide information on the reliability of intelligent system functions and thus to align expectations with actual system capabilities. They give the user an opportunity to adapt their own acceptance- and trust-related attitudes toward a system function.

The aim of CALIBRaiTE is to bring together trust calibration via reliability displays and AI-based systems by applying this approach to predictive systems in the Building Information Modeling (BIM) context. In the project, a predictive BIM-UI is expanded with reliability indicators based on reliability requirements elicited from domain experts. The indicators will be evaluated, and the resulting solutions will be captured as design patterns for reliability indicators.

Duration: 02/2020 – 01/2021

Funding: CALIBRaiTE is an exploratory project funded by the FFG via the IdeenLab 4.0 programme (project no. 878786).

Partners: CALIBRaiTE is a collaboration between the Center for HCI, the AIT Austrian Institute of Technology (Center for Technology Experience and Center for Digital Safety & Security), and BOC Asset Management GmbH.

Contact: Alexander Mirnig

Team